Thursday, August 23, 2018

Auto news on YouTube Aug 23 2018

Hello there, guys! Welcome to my first video.

Today I am going to teach you how to build a basic survival house in Minecraft.

If you like this video, please leave a like and please subscribe.

To get more videos, please press the bell button.

For more information >> How To Build a Survival House in Minecraft (easy) - Duration: 19:51.

-------------------------------------------

Build a Twelve Factor Node js App with Docker - egghead - Duration: 30:14.

The twelve-factor app is a development and deployment methodology for building apps with present-day concepts. Following it will ensure your app can be deployed easily and scaled effortlessly. The website 12factor.net explains the concepts of a twelve-factor app. The lessons in this course each directly relate to one of the twelve factors and are presented in order. The first few go over revision control and managing the config and dependencies of your NodeJS app, while the rest explain the DevOps of your app with nginx and Docker, with concepts closely aligned with the twelve-factor app.

Git flow is a branching model for Git, well-suited for a scaling development team. Let's create a new repository. New repositories are created by default with a master branch and no commits. Let's go ahead and create a new README file, add it to the repo, and commit it to kick things off. We'll also add a remote origin from GitHub and push our initial commits upstream.

The develop branch always contains completed features and fixes; therefore it is considered the working tree for our source repository. Go ahead and create a new develop branch and push it upstream.

All new development is built within a feature branch. Feature branches are always branched off of the develop branch and prefixed with feature/. All work within this feature is now completed within the foo branch; do your work and commit like usual.

Let's create another feature, this time named bar. Keeping feature branches separate keeps all commits related to a specific task or feature isolated. When you are complete with your feature, it is merged back into the develop branch. This is usually accomplished with a pull request so it can pass through a proper code review by another peer, but for now we will just merge this feature back into develop manually. It is a good idea to do a git pull on the develop branch often to refresh your local working tree from the remote origin; since we just created our repo, there won't be any updates to pull down.

You will want to rebase your branch from develop if you believe updates that are in the remote origin could potentially cause conflicts with your changes, or if you just want to pull down the latest updates from develop. If you check the log of the feature/bar branch, you will see that this branch has just the new bar.js file. Let's rebase from develop: this will rewind your feature branch, apply the updates from develop, and then replay your commits on top of the last commit. We can verify this by checking the logs again and seeing that the new foo.js file commit has been placed before our new bar.js file commit. Let's go ahead and merge our feature/bar branch into develop and then push it upstream.

It looks like our code is now ready for release, so we will create a new release branch. Every release branch contains the release version, in this case 1.0. Since the release is in its own branch, we don't have to worry about updates from develop conflicting with this specific release. We can now push this upstream and deploy the code to a staging environment for testing. Let's say our QA team did some testing and the foo text is actually supposed to be output twice. Let's update our foo.js file; we will then go ahead and commit this directly to the release branch.

Working with a larger team, you can also create a new feature branch and submit a pull request to the release branch. We'll push the release branch upstream and repeat the process until the release branch has passed its testing phase on staging and is ready to be released to production. When the updates to this release are complete, the release branch is merged back into develop, as we want to make sure any updates or bug fixes that were applied in our release testing are merged back into the working tree. Since the code in the release branch is fully tested and ready to be released to production, we also want to merge it into our master branch, which should only ever contain production-ready code. Then we tag the release; we can then push the tag upstream and use that tag to deploy our code to production.

The only commits to master are merges from either a release branch or a hotfix branch. Hotfixes are intended only for emergency fixes and follow a similar format to feature branches, prefixed with hotfix/. Note that hotfixes branch from master. We then merge the hotfix back into master,

then tag and push out our new release.

we also want the hotfix to be applied back into our develop branch so it's not

lost with future updates

when releases are complete it's usually good practice to delete all previously

merged in branches if you are following a pull request model you can remove

these at the time of the merge or pull request acceptance otherwise you can

just manually remove these branches locally and remotely
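As a rough sketch, the whole git flow workflow narrated above might look something like this on the command line; the remote URL, file names, and version numbers are placeholders, not the course's exact values:

```sh
# initial repo with a master branch, a README, and a remote origin
git init foo && cd foo
echo "# foo" > README.md
git add README.md && git commit -m "initial commit"
git remote add origin git@github.com:example/foo.git   # placeholder remote
git push -u origin master

# develop is the working tree; features branch off it as feature/*
git checkout -b develop && git push -u origin develop
git checkout -b feature/foo develop
echo "console.log('foo')" > foo.js
git add foo.js && git commit -m "add foo"
git checkout develop && git merge feature/foo

# keep a second feature branch current by rebasing onto develop
git checkout -b feature/bar develop
# ...work and commit, then later:
git rebase develop

# cut a release, merge it back into develop and master, then tag it
git checkout -b release/1.0 develop
git checkout develop && git merge release/1.0
git checkout master  && git merge release/1.0
git tag v1.0.0 && git push origin v1.0.0

# hotfixes branch from master and are merged back into master and develop
git checkout -b hotfix/urgent-fix master
```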

By default, npm is not 100% deterministic: even if you use npm shrinkwrap, you cannot guarantee that what you npm install on one computer will npm install exactly the same on another computer. You can fix this by using Yarn. Yarn was built to be deterministic, reliable, and fast. New projects can get started very easily by typing `yarn add` followed by the package you want to install. You can also specify an exact version to install, or use a range or version constraint if you prefer different install criteria. After installing the first package, Yarn will create a yarn.lock file; we can check the exact pinned-down version of this package by searching for our installed package in this file. Yarn also does file checksum matches to ensure exact one-to-one download results. Regarding version control, be sure to add both the package.json file and the yarn.lock file to your repository. You'll also want to ignore the node_modules directory, as these assets are compiled when the yarn command is run later at build time or post-deploy.
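A minimal sketch of that workflow; the package name here is only an example:

```sh
yarn add express          # installs the latest matching version and writes yarn.lock
yarn add express@4.16.3   # or pin an exact version

# commit the manifest and lockfile, but never node_modules
echo "node_modules/" >> .gitignore
git add package.json yarn.lock .gitignore
```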

Let's create a new project named foo. In there, let's create a simple NodeJS script that connects to a locally running MongoDB instance. We'll install the MongoDB module with yarn. After running our script with node, we can see that we can successfully connect to our MongoDB instance. The current configuration doesn't allow for the connection string to be changed on the fly.

Moving configuration values into environment variables has many benefits. Let's go ahead and copy the connection string and replace it with a reference to our environment variable. Environment variables are referenced by process.env followed by the name of our environment variable in caps; in this case we'll name it MONGO_URI. Let's save our file.
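The script at this point might look roughly like the following; the file name, database name, and exact driver API (which varies between mongodb driver versions) are assumptions:

```js
// index.js -- the connection string now comes from the environment, not from code
const { MongoClient } = require('mongodb');

MongoClient.connect(process.env.MONGO_URI)
  .then(() => console.log('connected to MongoDB'))
  .catch(err => console.error('connection failed', err));
```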

We'll start the node script again, but this time prepend the command with our new environment variable. We can see that we can still connect to the database, but our connection string is moved out of code and into the environment. If you have many environment variables, or want an easier way to start your script, you can create a new .env file and move your environment variables there.
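For example (the database name foo is just a guess at the lesson's value):

```sh
# one-off: prepend the variable to the command
MONGO_URI=mongodb://localhost:27017/foo node index.js

# or keep it in a .env file instead
echo "MONGO_URI=mongodb://localhost:27017/foo" > .env
```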

To inject our external environment file into a node.js script, we'll need a package called `dotenv`; let's add it with yarn. Next we'll require this module at the top of our script, immediately invoking the config function. This function will automatically look for a file named .env and inject the values contained within this file into the environment. Let's go back and start our node process directly now and see that we are still connecting to the MongoDB instance from the values within our environment file.
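The change amounts to roughly this sketch, reusing the hypothetical index.js from above:

```js
// load .env into process.env before anything reads configuration
require('dotenv').config();

const { MongoClient } = require('mongodb');
MongoClient.connect(process.env.MONGO_URI)
  .then(() => console.log('connected using values from .env'))
  .catch(err => console.error(err));
```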

Express app that serves up a simple request of an image of our cat friend

Herman let's set up a proxy for our /images path to route request

through a content delivery network depending on a specified URL our first

step is to require the express-http-proxy module which is Express middleware

to simply proxy requests then we will set up our base image URL variable to an

environment variable with the same name let's create a new proxyBaseImageURL

variable we want to check if the BASE_IMAGE_URL environment variable exists

and trigger which middleware to use based on that condition

Let's move up our Express static middleware to be used as a fallback in the event our environment variable is not defined. We'll also now update our middleware to use the value returned in the proxyBaseImageUrl variable. Let's build our proxy condition. The first parameter the proxy middleware accepts is the URL of where we want to proxy requests, in this case the value of our base image URL environment variable. The second parameter is an options object; within these options, let's define a proxy request path resolver (proxyReqPathResolver). This is a function where we can define a custom path for our image request; it expects the return of a simple string or a promise. The function contains the request object as its first parameter. Let's set the new path variable to equal the base image URL concatenated with the requested path. We'll also log this path to the console so we know when requests are being proxied and where they are being proxied to.

Finally, we will return this new path. Let's save this file, as it's now ready to proxy our requests. Before starting the web server, make sure to install the express-http-proxy module. Let's start our NodeJS script, but first prepend the command with the new environment variable named BASE_IMAGE_URL, set to the URL of the CDN; in this case I am storing Herman within a Google Cloud Storage bucket. Now when we start our app, we can see from the console log output that requests are being properly proxied through to our CDN.
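Pulling those steps together, the proxy setup might look something like the sketch below. The port, the local images directory used by the static fallback, and the file name are assumptions; proxyReqPathResolver is express-http-proxy's name for the path-resolver option described above.

```js
// server.js -- proxy /images to a CDN when BASE_IMAGE_URL is set, else serve local files
const express = require('express');
const proxy = require('express-http-proxy');

const app = express();
const baseImageUrl = process.env.BASE_IMAGE_URL;

const proxyBaseImageUrl = baseImageUrl
  ? proxy(baseImageUrl, {
      proxyReqPathResolver: req => {
        const path = baseImageUrl + req.url;
        console.log('proxying request to', path);
        return path;
      }
    })
  : express.static('images'); // fallback when the environment variable is not defined

app.use('/images', proxyBaseImageUrl);
app.listen(8080);
```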

Once you have written the Dockerfile, you can create an executable bundle of your app with the docker build command. This will pull in your app dependencies and compile binaries and assets into a process that can later be run as a tagged release. Type docker build and a -t; what follows is the repo name for the build, which is typically the vendor name followed by a slash, then the app name. That said, it can also be any arbitrary value. After the repo name is a colon followed by the tag or version number. If you leave this part out it will default to latest, but this is rarely desired in a production build: you want to tag every release with a specific, unique version number. Here we will use the tag 1.0. Then you want a space followed by the directory you want to build from, so we will use a period to specify the current directory.
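In other words, something like this (the vendor and app names are placeholders):

```sh
docker build -t myvendor/myapp:1.0 .
```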

it may take a few moments to run the build process depending on the size of

your base image install time of dependencies and asset size and number

of files being copied to the bundle. After the bundle has been built, we will use Docker Compose to run our build. Create a file named docker-compose.yml; this will contain your release configuration. Here's a simple config for our node.js app. The most important takeaway here is the image property, which should reference the specific tag of the image we just built. This config file, when combined with our tagged build, creates our release. Note that we can also define other aspects relating to this release in our config file, such as setting environment variables.
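A sketch of what such a release configuration could look like; the service name, tag, port, and environment variable are assumptions, not the course's exact file:

```yaml
# docker-compose.yml
version: '3'
services:
  app:
    image: myvendor/myapp:1.0   # pin the exact tagged build for this release
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
```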

After saving this file, we should use version control to commit this release. Before committing, ensure you have both .dockerignore and .gitignore files that are set to ignore the .env file and the node_modules directory. Let's add all the files to git,

then commit the release

After committing, we should tag our release with git tag and use an incrementing version number, in this case v1.0.0. To run this release for production, we first check out the tag with git checkout v1.0.0. We create or modify the .env file we are relying on in production; this file should not be committed to version control, as it often contains security credentials for third-party services.

Let's then kick off the production release by running it with docker-compose up -d app. This will pull down any Docker images that don't exist on the machine, apply the Compose configuration, and start your containers in daemon mode. We can confirm the containers are running by typing docker ps and follow the log output by running docker logs -f foo_app_1.
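Taken together, the release-and-deploy steps might look roughly like this; the tag follows the earlier example, and the compose container name depends on your project directory:

```sh
# tag the tested release and check it out on the production host
git tag v1.0.0 && git push origin v1.0.0
git checkout v1.0.0

# create the production .env on the host (never committed), then start in daemon mode
docker-compose up -d app
docker ps
docker logs -f foo_app_1   # name follows compose's <project>_<service>_<index> pattern
```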

upon opening a web browser to the URL localhost:8080 we can

see our cat friend Herman if you desire you can further optimize the tag

checkout and production deploy process by creating custom bash scripts to

further automate the process of releasing your software docker

containers should be designed to be stateless meaning that they can survive

system reboots and container terminations gracefully and without the

loss of data designing with stateless containers in mind will also help your

app grow and make future horizontal scaling tasks trivial let's review our

setup. Here we have a small Express app that uploads files to the local file system. Note that the base route serves up an HTML form with a file input, and the upload route handles moving our uploaded file into a folder on the file system named uploads. The Dockerfile for this app is simple: we are simply setting up our current working directory, copying over assets, making an uploads directory, running Yarn to install prerequisites, exposing port 8080, and then starting a web server.
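A Dockerfile along those lines might read as follows; the base image, working directory, and entry file are assumptions:

```dockerfile
FROM node:8
WORKDIR /src
COPY . /src
RUN mkdir -p /src/uploads && yarn install
EXPOSE 8080
CMD ["node", "index.js"]
```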

we are using docker-compose to run our containers for our main app service we

will simply build our app from the local directory's Dockerfile and also bind

port 8080 from the app to the host we will start our app up with

docker-compose up

let's test out the file upload functionality by uploading an image of

our cat friend

We can see that Herman is successfully uploaded to the uploads folder, but we have a small problem here. Let's stop our app, remove our containers, and start our app back up; then let's refresh our browser window. Herman did in fact die when our container died. This presents two problems: one being that we will lose all our uploaded content if a container is killed off, the other being that no one likes dead cats. This shows that Docker's file system is ephemeral and that we need to design systems that can persist through container terminations. Our entire fleet of containers should be able to be killed off at any given moment and redeployed at any time without losing any data. The easiest way to fix this problem is to set up a persistent volume.

Persistent volumes map a host directory into a container directory so that even when containers die, the volume does not; the directory will remap back to the host when new containers are deployed. Setting up volumes is really easy with Compose. Let's open up our docker-compose.yml file, then add a volumes property. Under this property we will simply choose any arbitrary name for our persistent volume; let's name this app-data, and make sure to suffix the name with a colon. Next let's go into our app service and add a volumes property. Since there can be many volumes set up, we prefix our volume entries with a dash, then we specify the name of the volume we want to use, in this case app-data. Let's suffix that with a colon and then specify a folder to be persisted, in this case /src/uploads. When the container starts, this volume path will be mounted from our volume into the container at this directory. This is enough to persist data between container deletions and respawns.
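The resulting compose file might look roughly like this; the service definition is an assumption, while the app-data volume name and /src/uploads path come from the lesson:

```yaml
# docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - app-data:/src/uploads   # volume name : directory inside the container

volumes:
  app-data:
```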

Let's remove our current app containers to ensure we are running from a clean state, then start our app up again with Compose.

we will reload our cat friend Herman

and follow the link to ensure he is uploaded now let's stop our app and then

completely kill off our containers and Herman again with docker-compose rm -f

At this point in time, Herman is officially a Schrödinger's cat, because he is currently both dead in our containers but also alive and well within our box, the volume. Let's take him out of the box and start our app back up

with docker-compose up if we refresh our browser we can see

that Herman is alive and well. Let's take a look at a simple node.js app. All we are doing is responding hello world to every request and then starting the web server on port 8000. Our Dockerfile is simple as well, just copying over the index.js file and starting node. Let's build this into a Docker image named foo and then run it with docker run -d foo. If we try to access this address and port

with curl, we will get a connection refused failure as a response. This is because Docker locks down all ports by default, just like a firewall. We need to open the container's port 8000 to make it publicly accessible. Let's start up another container, but this time also defining a -p flag. This is how we tell Docker which port we want to access on the container and where on the host we want to access it; the host definition comes first, followed by the container port.
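For example, using the foo image built above:

```sh
# publish container port 8000 on host port 8000 (host:container)
docker run -d -p 8000:8000 foo
```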

if we try to access this again with curl we will get our hello world

response note that we can also map the container

port to a different port on the host this is useful when running multiple

docker images at the same time that all run on the same port

if using compose you could do the same thing by defining the ports option it

accepts multiple values following the same format as the -p flag with the

host port coming first followed by the container port let's test this out by

starting our app with compose

then testing this new 8010 port with curl
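A minimal sketch of that compose option, using the 8010-to-8000 mapping mentioned above (the service definition is an assumption):

```yaml
version: '3'
services:
  app:
    build: .
    ports:
      - "8010:8000"   # host port first, container port second
```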

let's make a directory called NodeJS that contains our app files

within that we will create a simple NodeJS app that responds hello world

from server followed by the name of our server which we will define from an

environment variable let's then create a docker file that simply kicks off our

node.js process and then we will build our app image

with docker build -t app-nodejs .

Let's start two node.js processes. We'll start the first server with a server name of chicken and a container name we can reference later, chicken. We'll do something similar with our second server, but with the name steak for both.
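Those two commands might look like this; the environment variable name SERVER_NAME is an assumption about what the lesson's script reads:

```sh
docker run -d --name chicken -e SERVER_NAME=chicken app-nodejs
docker run -d --name steak   -e SERVER_NAME=steak   app-nodejs
```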

Note that nginx will be handling our external requests, so we do not need to bind any ports of the app containers back to the host. Our containers will only be accessible from other Docker containers; since these containers will not be directly accessible by a public network, this will also add an additional level of security to our app. Let's create a new nginx directory in the root of our project and enter it. In this directory we will create a new file containing our nginx configuration, named nginx.conf. The purpose of our nginx process is to load balance requests to our node.js processes. Let's create a server{} block with a location{} block underneath for the root path /. Within this block, define a proxy_pass directive followed by http://, followed by any arbitrary value; we'll use "app" here, followed by a semicolon. What we're telling nginx to do here is to listen at the root path of our web server and pass our requests through a proxy named app. Let's go ahead and define that proxy. Create a new upstream{} block followed by the name of our proxy, "app". Next we will follow it with a line starting with server, followed by the name of our first server, "chicken", and the default port 8000. We will repeat the line again, but this time with the "steak" server. The upstream block we define here tells nginx which servers to proxy requests to; we can define as many lines here as we want. nginx will treat requests to servers defined within this group with a round-robin balancing method, and you can even define weights on the servers with the weight option.
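Assembled, the nginx.conf described above would look roughly like this; it is intended as a drop-in server config for the official nginx image, and the listen port and optional weight line are assumptions:

```nginx
upstream app {
    server chicken:8000;
    server steak:8000;
    # server steak:8000 weight=2;   # optional per-server weighting
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```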

Next let's create an nginx Docker image that just copies our nginx.conf file to the default configuration file location.

let's build this image and name it "app-nginx"
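A sketch of that image; the destination path assumes the official nginx base image, whose default server config lives at /etc/nginx/conf.d/default.conf:

```dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/conf.d/default.conf
```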

The final step is to start our nginx container and map port 8080 on the host to port 80 on the container. We will then use the link directive to link our chicken and steak servers, making them accessible within the container. If we use curl to hit our nginx server on port 8080, we will see that nginx is properly routing requests to our chicken and steak NodeJS servers in a round-robin fashion.
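A rough sketch of that final command and the test; the nginx container name is an assumption:

```sh
docker run -d -p 8080:80 --link chicken --link steak --name app-nginx app-nginx
curl localhost:8080   # repeated requests alternate between chicken and steak
```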

We're going to start by creating a script that outputs hello world and then kills itself after 3 seconds.

we will create a docker file which simply adds a script to our image and

executes it let's build this into an image named

"helloworld" then run this with a standard "docker run"

command with the name of "test1"

if we check the status with "docker ps" and filter to show only our "test1" container, we can see that the container has exited and is no longer running

let's try starting another container but this time with the --restart flag

and pass in the value "always" this will restart the container whenever it exits to

ensure it's always running let's name this container "test2"

if we go ahead and inspect it now we will see that the container is still running
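For comparison, the two runs might look like this:

```sh
docker run -d --name test1 helloworld                    # exits when the script finishes
docker run -d --name test2 --restart always helloworld   # restarted automatically on exit
docker ps --filter name=test2
```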

Another way to ensure high availability with Docker containers is to use docker-compose. Create a docker-compose.yml file

and we will define a simple configuration for our helloworld app

make sure to add a "restart" flag with the value of "always" to your configuration

next we can easily scale this app with "docker-compose up" followed by the --scale flag

let's tell compose to start helloworld

with three instances

we will see that there are still three containers available and always restarting
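A sketch of that compose setup; the service name and build context are assumptions:

```yaml
# docker-compose.yml
version: '3'
services:
  helloworld:
    build: .
    restart: always
```

Scaling would then be something like `docker-compose up -d --scale helloworld=3`.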

we define the app servers we want to use within our docker-compose.yml file

docker makes it easy to run consistent

environments between dev, stage, and prod. Because the images and versions we define in this YAML file are the exact images and versions we use on staging and production, we can ensure we have 100% parity between environments and

that differences between environments will never exist. Third party backing

services are typically configured with environment variables. This allows the

use of the exact same codebase between environments and requests are directed

to the appropriate backing service depending on the value of the

environment variables. Let's take an example of Google Analytics say you want

to test the functionality of your Analytics code in a local environment

but don't want it to affect prod you would simply create a new Google

Analytics account and use the ID of that test account within an environment variable. Then you define the environment variable within your .env file.

Take the code snippet supplied from your Google Analytics account, then substitute the reference to your analytics account number with a

reference to your environment variable when you start your docker container

your environment variables will be set from your .env file meaning that the test

analytics ID will be applied here. When you deploy code through staging and production, you can use the different, related Analytics IDs that are defined within each specific environment's .env file. Since that .env file should not be committed to version control systems, you should most likely add the process of adding and changing environment variables to your deployment process. This way you can be assured that the correct environment variables are set upon deployments to the related environment.
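As an illustrative sketch only, the substitution could look like the following; the variable name GA_TRACKING_ID, the placeholder ID, and the server-rendered snippet are all assumptions rather than the course's code:

```js
// .env (per environment):  GA_TRACKING_ID=UA-XXXXXXXX-1
require('dotenv').config();
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  // the real Google Analytics loader snippet would surround this line;
  // the point is that the account ID comes from the environment, not the code
  res.send(`<script>ga('create', '${process.env.GA_TRACKING_ID}', 'auto');</script>`);
});

app.listen(8080);
```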

Anything written to append-only files within containers should be piped to stdout. This makes it easy to diagnose log files and ensures that append-only files don't consume disk space. Some widely used Docker images, such as

nginx already write to stdout however

sometimes you have an application-specific log that you want

to apply the same methodology to say we have a simple app that overrides

the console.log function of node console.log already writes to

stdout so this is a really nonsensical example but we will use this

to get our point across on how to redirect writes from log files to stdout. In this case we will write all calls to console.log to a file named debug.log.
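The overriding script could be as small as this sketch; the file names and logging interval are assumptions:

```js
// index.js -- send every console.log call into an append-only debug.log file
const fs = require('fs');
const logStream = fs.createWriteStream('debug.log', { flags: 'a' });

console.log = (...args) => logStream.write(args.join(' ') + '\n');

setInterval(() => console.log('still running at', new Date().toISOString()), 1000);
```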

If we run this script for a few moments and then check the output of debug.log, we will see that the script is running and correctly writing to debug.log. There's a pretty simple way we can

accomplish this change within our docker file. We will add a new instruction

before our CMD instruction: a RUN. What we will do here is use "ln -sf"

to create a symlink to /dev/stdout from our debug.log file

this simple line will take whatever would normally be written to our

debug.log file and pipe it right to stdout
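In the Dockerfile, that instruction might sit roughly like this; the base image and paths are assumptions, and the symlink target must match wherever the app writes its log:

```dockerfile
FROM node:8
WORKDIR /src
COPY . /src
RUN ln -sf /dev/stdout /src/debug.log
CMD ["node", "index.js"]
```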

let's build our image

and then run it in daemon mode

if we use "docker logs" with a "-f" function to tail the output of our

container we can see that the output normally written to our debug.log

file is definitely piping to stdout. we can also confirm that no output

is further writing to debug.log within our container, so we don't have to

worry about this container accumulating to space over time. docker containers are

cheap meaning that they take very little disk space in memory to run and they

could be killed off and started very quickly and easily. This disposable

nature makes it very easy to run isolated containers that can execute

arbitrary tasks. Let's use "docker run" with the "-it" flag and the "--rm" flag

We will run the ubuntu image and drop into a bash prompt. We are now at the command prompt within the Ubuntu operating system and can run any command that is available within Ubuntu. Let's exit the prompt. If we check the status of all containers, we can see that we don't have any: the "--rm" flag killed the container once we exited the process. We can also skip the bash prompt and execute commands directly. Docker just started the ubuntu container, ran our "ls" command, displayed the output to stdout, exited Ubuntu, and then killed off the container, all in less than a second.
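The two variants from this lesson:

```sh
docker run -it --rm ubuntu bash   # throwaway interactive shell; removed on exit
docker run --rm ubuntu ls         # run a single command, print to stdout, clean up
```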

For more information >> Build a Twelve Factor Node js App with Docker - egghead - Duration: 30:14.

-------------------------------------------

How to Build a Revenue Focused Website Without Touching A Line of Code - Preview Video - Duration: 3:22.

So, let's talk about why you need a website in the first place.

I want you to think about your main office.

The office from where you work.

Maybe you are working from your home,

Maybe you have your own studio.

The point is,

You have invested in some Real Estate.

You have posters of quotes that inspire you on the wall.

Quotes about the importance of hustle, dedication, work ethic and exercise.

You have a flowerpot in the corner and,

a brochure about your services lying on the couch.

People who come to your office,

they look at the quotes and understand

what you stand for and what you don't stand for.

They look at your brochure and they understand

what services you offer and

how it is beneficial for them.

Not just that,

When people visit your office,

they somehow feel the elegance and professionalism and

the overall feeling your office gives them.

This is not something they can explain to somebody

But at the same time,

It is something that is influencing them in some way.

You know this.

That's why you have designed your office in a very specific way.

You know that people who come to your office are there because

they are interested in what you offer.

You want to convert them from browsers to long-time customers.

You know that your office is

the first impression of your business.

That's why you have optimized your office according to it.

Think of a website as your online office.

People can visit here when they are interested in what you have to offer.

They will instantly know what you stand for and what you don't stand for.

They will instantly feel that this guy or girl is a professional.

Within 5 seconds of opening your website,

they (your website visitors) will know

What your Business is about.

What they are going to get from you.

What to do next.

like,

Click on the button to watch a free webinar.

Or get on a free consultation call.

Or try a 7 day membership.

You cannot achieve this without having

a well thought out,

revenue-focused website.

Your Brand, Influence and by extension,

revenue

from online sales depend on it.

I will be building a revenue-focused website and blog

in front of you from scratch

by the way,

I am not a coder.

This is what my team and I have done for our business.

This is what we have done for our customers' businesses

This is what I am going to show you how to do, Step by Step.

The intention behind this course is

to build the Ultimate Website Development Training for you.

Now you might be saying,

Okay I got it.

The Website is important.

But what is the difference between

a Website and a Blog?

How are they different?

Which one should I build?

Let's see the difference between a website and a blog so that,

you can decide what you should do

for your business.
