AWS cognito-idp client - Python (boto3)

These are the methods available on the boto3 `cognito-idp` client:
add_custom_attributes()
admin_add_user_to_group()
admin_confirm_sign_up()
admin_create_user()
admin_delete_user()
admin_delete_user_attributes()
admin_disable_provider_for_user()
admin_disable_user()
admin_enable_user()
admin_forget_device()
admin_get_device()
admin_get_user()
admin_initiate_auth()
admin_link_provider_for_user()
admin_list_devices()
admin_list_groups_for_user()
admin_list_user_auth_events()
admin_remove_user_from_group()
admin_reset_user_password()
admin_respond_to_auth_challenge()
admin_set_user_mfa_preference()
admin_set_user_password()
admin_set_user_settings()
admin_update_auth_event_feedback()
admin_update_device_status()
admin_update_user_attributes()
admin_user_global_sign_out()
associate_software_token()
can_paginate()
change_password()
confirm_device()
confirm_forgot_password()
confirm_sign_up()
create_group()
create_identity_provider()
create_resource_server()
create_user_import_job()
create_user_pool()
create_user_pool_client()
create_user_pool_domain()
delete_group()
delete_identity_provider()
delete_resource_server()
delete_user()
delete_user_attributes()
delete_user_pool()
delete_user_pool_client()
delete_user_pool_domain()
describe_identity_provider()
describe_resource_server()
describe_risk_configuration()
describe_user_import_job()
describe_user_pool()
describe_user_pool_client()
describe_user_pool_domain()
forget_device()
forgot_password()
get_csv_header()
get_device()
get_group()
get_identity_provider_by_identifier()
get_paginator()
get_signing_certificate()
get_ui_customization()
get_user()
get_user_attribute_verification_code()
get_user_pool_mfa_config()
get_waiter()
global_sign_out()
initiate_auth()
list_devices()
list_groups()
list_identity_providers()
list_resource_servers()
list_tags_for_resource()
list_user_import_jobs()
list_user_pool_clients()
list_user_pools()
list_users()
list_users_in_group()
resend_confirmation_code()
respond_to_auth_challenge()
revoke_token()
set_risk_configuration()
set_ui_customization()
set_user_mfa_preference()
set_user_pool_mfa_config()
set_user_settings()
sign_up()
start_user_import_job()
stop_user_import_job()
tag_resource()
untag_resource()
update_auth_event_feedback()
update_device_status()
update_group()
update_identity_provider()
update_resource_server()
update_user_attributes()
update_user_pool()
update_user_pool_client()
update_user_pool_domain()
verify_software_token()
verify_user_attribute()
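As a quick illustration, here is a minimal sketch of calling two of the methods above through boto3. The client ID, pool ID, and region are placeholders, not values from this document, and real calls require AWS credentials from the usual credential chain:

```python
def sign_up_user(username, password, email,
                 client_id="YOUR_APP_CLIENT_ID", region="us-east-1"):
    """Register a user via sign_up(); returns the new user's UserSub."""
    # boto3 is imported inside the function so this sketch can be loaded
    # without the dependency installed.
    import boto3
    client = boto3.client("cognito-idp", region_name=region)
    resp = client.sign_up(
        ClientId=client_id,
        Username=username,
        Password=password,
        UserAttributes=[{"Name": "email", "Value": email}],
    )
    return resp["UserSub"]


def confirm_user(username, user_pool_id="YOUR_POOL_ID", region="us-east-1"):
    """Confirm the account without an emailed code (needs admin credentials)."""
    import boto3
    client = boto3.client("cognito-idp", region_name=region)
    client.admin_confirm_sign_up(UserPoolId=user_pool_id, Username=username)
```

The non-admin methods (`sign_up`, `confirm_sign_up`, `initiate_auth`) act on behalf of the end user, while the `admin_*` variants require IAM permissions on the user pool.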


# How to collect and index nginx log using filebeat and elasticsearch

## What is Elasticsearch?

- Elasticsearch indexes the data read from sources such as Logstash or Beats. It's a full-text search engine.

- It provides APIs to query, access, and aggregate the indexed data.

- It is built on the Apache Lucene search library.
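For example, indexed data can be queried over Elasticsearch's REST API. A minimal sketch using only the Python standard library (assumes a node is reachable at the given host; the function name is illustrative, not part of any library):

```python
import json
import urllib.parse
import urllib.request

def es_search(index, query, host="http://localhost:9200", size=10):
    """Run a Lucene query-string search and return the matching documents.

    `index` may be a wildcard pattern such as 'filebeat-*'.
    """
    url = "%s/%s/_search?q=%s&size=%d" % (
        host, index, urllib.parse.quote(query), size)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["hits"]["hits"]
```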

## What is Kibana?

- It's used to read and query data from Elasticsearch indices through its APIs.

- It's also used to visualize the indexed data and to generate graphs and charts.

## What are Beats?

- Beats are lightweight shippers installed as agents.

- They read data, parse it, and ship it to either Elasticsearch or Logstash.

- Metricbeat, Filebeat, and Packetbeat are some of the Beats available.

- 'libbeat' is the library that can be used to write a custom Beat.

# Procedure:

## Step 1: Pull the nginx, Elasticsearch, Filebeat, and Kibana Docker images

```sh
docker pull nginx
docker pull elasticsearch:7.14.2
docker pull elastic/filebeat:7.14.2
docker pull elastic/kibana:7.14.2
```

## Step 2: Create the Elasticsearch container and publish port 9200

```sh

docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.14.2

```
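Elasticsearch takes a little while to boot, so it can help to poll until it answers before continuing. A minimal stdlib sketch (the helper name is illustrative; host and port come from the command above):

```python
import time
import urllib.request

def wait_for_es(host="http://localhost:9200", timeout=120):
    """Poll until the single-node cluster answers HTTP, or raise TimeoutError."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(host) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            # connection refused while the node is still starting
            time.sleep(2)
    raise TimeoutError("Elasticsearch did not come up on " + host)
```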

## Step 3: Create the Kibana container, publish port 5601, and link it to Elasticsearch

```sh

docker run -d --name kibana --link elasticsearch -p 5601:5601 elastic/kibana:7.14.2

```

## Step 4: Run the Filebeat setup

Running Filebeat with the `setup` command creates the index pattern and loads the sample visualizations, dashboards, and machine learning jobs:

```sh
docker run -it --link elasticsearch \
  --link kibana elastic/filebeat:7.14.2 \
  setup -E setup.kibana.host=kibana:5601 \
  -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
```

## Step 5: Download this example configuration file into the current directory as a starting point

```sh

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.7/deploy/docker/filebeat.docker.yml

```
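The downloaded file enables Docker autodiscover with hints, which is what lets the `co.elastic.logs/*` labels in Step 8 route the nginx container's output through the nginx module. It looks roughly like this (reproduced from memory of the Elastic docs; prefer the downloaded copy if they differ):

```yaml
filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false

filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true

processors:
- add_cloud_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'
```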

## Step 6: Create the Filebeat container and link it to Elasticsearch and Kibana

```sh
docker run -d \
  --link elasticsearch \
  --link kibana \
  --name=filebeat \
  --user=root \
  --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
  --volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
  --volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
  elastic/filebeat:7.14.2 filebeat -e --strict.perms=false \
  -E 'output.elasticsearch.hosts=["elasticsearch:9200"]'
```

## Step 7: Use a self-created conf.d/app.conf and nginx.conf to set up the nginx server

If you want to start from the image's defaults, run a throwaway container and copy them out first: `docker run -d -p 8080:80 --name web nginx`, then `docker cp web:/etc/nginx/nginx.conf .` and `docker cp web:/etc/nginx/conf.d .`, and finally `docker stop web && docker rm web`.

#### Sample app.conf (merged and updated)

```nginx
server {
    listen 80;
    access_log /var/log/nginx/access.log main;
    #server_name <your-site-name>;

    location / {
        #include proxy_params;
        proxy_pass http://127.0.0.1:7777;
    }
}
```

#### Sample nginx.conf (merged and updated)

```nginx
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main escape=json '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" "$http_x_forwarded_for" "$request_body" '
        '"$request_time" "$http_referrer" "$query_string" '
        '"$request_uri" "$uri" "$arg_name" '
        '"$cookie_name" "$document_root" "$document_uri" '
        '"$proxy_host" "$proxy_port" "$proxy_protocol_addr" '
        '"$request_filename" "$request_body_file" "$cookie_bar" "$sent_http_set_cookie"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
```

## Step 8: Create the nginx container with the Filebeat autodiscover labels

```sh
# with --net host the container shares the host's network stack,
# so no -p port mapping is needed (Docker would ignore it anyway)
docker run \
  -v "$PWD"/conf.d:/etc/nginx/conf.d \
  -v "$PWD"/nginx.conf:/etc/nginx/nginx.conf \
  --label co.elastic.logs/module=nginx \
  --label co.elastic.logs/fileset.stdout=access \
  --label co.elastic.logs/fileset.stderr=error \
  --label co.elastic.metrics/module=nginx \
  --label co.elastic.metrics/metricsets=status \
  --label co.elastic.metrics/hosts='${data.host}:${data.port}' \
  --detach=true \
  --name web-nginx \
  --net host \
  nginx
```
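Once everything is up and a few requests have hit nginx, you can check that the access logs reached Elasticsearch by counting nginx-module documents. A small stdlib sketch (hypothetical helper; assumes the stack above is running locally and that Filebeat 7.x tags documents with the ECS field `event.module`):

```python
import json
import urllib.request

def count_nginx_docs(host="http://localhost:9200"):
    """Return how many nginx-module documents Filebeat has indexed so far."""
    url = host + "/filebeat-*/_count?q=event.module:nginx"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["count"]
```

A count greater than zero means the pipeline (nginx → Filebeat → Elasticsearch) is working end to end; the same data should then be visible in Kibana at http://localhost:5601.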


## References

- https://www.elastic.co/guide/en/kibana/7.7/docker.html
- https://www.elastic.co/guide/en/beats/filebeat/7.7/running-on-docker.html

Applications of Machine Learning

1. Image recognition
2. Speech recognition
3. Traffic prediction
4. Product recommendations
5. Self-driving cars
6. Email spam and malware filtering
7. Virtual personal assistants
8. Online fraud detection
9. Stock market trading
10. Medical diagnosis
11. Automatic language translation
11. Automatic Language Translation:
