Scrapy is controlled through the scrapy command-line tool, referred to here as the "Scrapy tool" to distinguish it from its sub-commands, which we call simply "commands" or "Scrapy commands".
The Scrapy tool offers several commands, for multiple purposes, each accepting a different set of arguments and options.
(The scrapy deploy command was removed in 1.0 in favor of the standalone scrapyd-deploy tool. See Deploying your project.)
Configuration settings¶
Scrapy looks for configuration parameters in ini-style scrapy.cfg files in these standard locations:
1. /etc/scrapy.cfg or c:\scrapy\scrapy.cfg (system-wide),
2. ~/.config/scrapy.cfg ($XDG_CONFIG_HOME) and ~/.scrapy.cfg ($HOME) for global (user-wide) settings, and
3. scrapy.cfg in the root of a Scrapy project (see the next section).
Settings from these files are merged in the listed order of preference: user-defined values have higher priority than system-wide defaults, and project-wide settings override all the others, when defined.
Scrapy also understands, and can be configured through, a number of environment variables. Currently these are:
SCRAPY_SETTINGS_MODULE (see Designating the settings)
SCRAPY_PROJECT (see Sharing the root directory between projects)
SCRAPY_PYTHON_SHELL (see Scrapy shell)
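For a quick illustration of what SCRAPY_SETTINGS_MODULE does, here is a minimal sketch, assuming a hypothetical myproject.settings module created by startproject: pointing the variable at that module makes its values visible to Scrapy's settings loader.

import os

# Point Scrapy at a settings module before loading the settings.
# "myproject.settings" is a hypothetical module generated by `scrapy startproject myproject`.
os.environ["SCRAPY_SETTINGS_MODULE"] = "myproject.settings"

from scrapy.utils.project import get_project_settings

settings = get_project_settings()
print(settings.get("BOT_NAME"))  # prints the BOT_NAME defined in myproject/settings.py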
Default Scrapy project structure¶
Before we get into the command line utility and its subcommands, let's first understand the directory structure of a Scrapy project.
Although it can change, all Scrapy projects have the same default file structure, similar to this:
scrapy.cfg
myproject/
    __init__.py
    items.py
    middlewares.py
    pipelines.py
    settings.py
    spiders/
        __init__.py
        spider1.py
        spider2.py
        ...
The directory where the scrapy.cfg file resides is known as the project root directory. That file contains the name of the Python module that defines the project settings. Here is an example:

[settings]
default = myproject.settings
Sharing the root directory between projects¶
A project root directory, the one that contains the scrapy.cfg file, may be shared by multiple Scrapy projects, each with its own settings module.
In that case, you must define one or more aliases for those settings modules under the [settings] section of your scrapy.cfg file:

[settings]
default = myproject1.settings
project1 = myproject1.settings
project2 = myproject2.settings
By default, the scrapy command-line tool uses the default settings. Use the SCRAPY_PROJECT environment variable to specify a different project for scrapy to use:

$ scrapy settings --get BOT_NAME
Project 1 Bot
$ export SCRAPY_PROJECT=project2
$ scrapy settings --get BOT_NAME
Project 2 Bot
Using the scrapy tool¶
You can start by running the Scrapy tool with no arguments and it will print out usage help and available commands:
Scrapy X.Y - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  crawl         Run a spider
  fetch         Fetch a URL using the Scrapy downloader
[...]
The first line will print the currently active project if you are in a Scrapy project. In this example, it was run outside of a project. If run from a project, it would have printed something like this:
Scrapy X.Y - project: myproject

Usage:
  scrapy <command> [options] [args]
[...]
Creating projects¶
The first thing you typically do with the scrapy tool is create your Scrapy project:

scrapy startproject myproject [project_dir]

That will create a Scrapy project under the project_dir directory. If project_dir is not specified, project_dir will be the same as myproject.
Then go to the new project folder:
cd project_dir
And you are ready to use the scrapy command to manage and control your project from there.
Controlling projects¶
You use the scrapy tool from inside your projects to control and manage them.
For example, to create a new spider:
scrapy genspider mydomain mydomain.com
Some Scrapy commands (like crawl) must be run from inside a Scrapy project. See the commands reference below for more information on which commands must be run from inside projects, and which not.
Also keep in mind that some commands may behave slightly differently when run from inside a project. For example, the fetch command will use spider-overridden behaviours (such as the user_agent attribute that overrides the user agent) if the URL being fetched is associated with a specific spider. This is intentional, as the fetch command is meant to be used to check how spiders download pages.
Available tool commands¶
This section contains a list of the available built-in commands with a description and some usage examples. Remember, you can always get more information about any command by running:
scrapy <command> -h
And you can see all available commands with:
scrapy -h
There are two kinds of commands: those that only work from inside a Scrapy project (project-specific commands) and those that also work without an active Scrapy project (global commands), though the latter may behave slightly differently when run from inside a project, since they would use the project's overridden settings.
Global commands:
startproject
genspider
settings
runspider
shell
fetch
view
version
Project-only commands:
crawl
check
list
edit
parse
bench
startproject¶
Syntax: scrapy startproject <project_name> [project_dir]
Requires project: no

Creates a new Scrapy project named project_name, under the project_dir directory. If project_dir is not specified, project_dir will be the same as project_name.

Usage example:

$ scrapy startproject myproject
genspider¶
Syntax: scrapy genspider [-t template] <name> <domain or URL>
Requires project: no
New in version 2.6.0: The ability to pass a URL instead of a domain.

Creates a new spider in the current folder or in the current project's spiders folder, if called from inside a project. The <name> parameter is set as the spider's name, while <domain or URL> is used to generate the allowed_domains and start_urls spider attributes.
Usage example:
$ scrapy genspider -l
Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

$ scrapy genspider example example.com
Created spider 'example' using template 'basic'

$ scrapy genspider -t crawl scrapyorg scrapy.org
Created spider 'scrapyorg' using template 'crawl'
This is just a useful shortcut command for creating spiders from predefined templates, but it is by no means the only way to create spiders. You can create the spider source files yourself instead of using this command.
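For example, a minimal hand-written spider file, roughly what the basic template generates (names here are hypothetical), could look like this:

import scrapy


class ExampleSpider(scrapy.Spider):
    # The spider name is what you pass to `scrapy crawl`.
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["https://example.com/"]

    def parse(self, response):
        # Yield one item per page, containing its title.
        yield {"title": response.css("title::text").get()}

Saving a file like this under the project's spiders/ package has the same effect as running genspider with the basic template.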
crawl¶
Syntax: scrapy crawl <spider>
Requires project: yes

Start crawling using a spider.

Supported Options:
-h, --help: show a help message and exit
-a NAME=VALUE: set a spider argument (may be repeated)
--output FILE or -o FILE: append scraped items to the end of FILE (use - for stdout); to define the output format, set a colon at the end of the output URI (i.e. -o FILE:FORMAT)
--overwrite-output FILE or -O FILE: dump scraped items into FILE, overwriting any existing file; to define the output format, set a colon at the end of the output URI (i.e. -O FILE:FORMAT)
--output-format FORMAT or -t FORMAT: deprecated way of defining the format to use for dumping items, does not work in combination with -O

Usage examples:

$ scrapy crawl myspider
[ ... myspider starts crawling ... ]

$ scrapy crawl -o myfile:csv myspider
[ ... myspider starts crawling and appends the result to the file myfile in csv format ... ]

$ scrapy crawl -O myfile:json myspider
[ ... myspider starts crawling and saves the result in myfile in json format, overwriting the original content ... ]

$ scrapy crawl -o myfile -t csv myspider
[ ... myspider starts crawling and appends the result to the file myfile in csv format ... ]
check¶
Syntax: scrapy check [-l] <spider>
Requires project: yes

Run contract checks.

Usage examples:

$ scrapy check -l
first_spider
  * parse
  * parse_item
second_spider
  * parse
  * parse_item

$ scrapy check
[FAILED] first_spider:parse_item
>>> 'RetailPricex' field is missing

[FAILED] first_spider:parse
>>> Returned 92 requests, expected 0..4
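The checks themselves come from contracts written in the docstrings of the spider callbacks. A minimal sketch of a callback that scrapy check could exercise (spider, URL and field names are hypothetical):

import scrapy


class FirstSpider(scrapy.Spider):
    name = "first_spider"

    def parse_item(self, response):
        """Contracts live in the callback docstring.

        @url http://www.example.com/some/page.html
        @returns items 1 1
        @returns requests 0 0
        @scrapes name price
        """
        # The @scrapes contract above requires these two fields to be present.
        yield {
            "name": response.css("h1::text").get(),
            "price": response.css(".price::text").get(),
        }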
list¶
Syntax: scrapy list
Requires project: yes
List all spiders available in the current project. The output is one spider per line.
Usage example:
$ scrapy list
spider1
spider2
edit¶
Syntax: scrapy edit <spider>
Requires project: yes

Edit the given spider using the editor defined in the EDITOR environment variable or (if unset) the EDITOR setting.

This command is provided only as a convenience shortcut for the most common case; the developer is of course free to choose any tool or IDE to write and debug spiders.
Usage example:
$ scrapy edit spider1
fetch¶
Syntax: scrapy fetch <url>
Requires project: no
Downloads the specified URL using the Scrapy downloader and writes the content to stdout.
The interesting thing about this command is that it fetches the page the way the spider would download it. For example, if the spider has a USER_AGENT attribute which overrides the User Agent, it will use that one.

So this command can be used to "see" how your spider would fetch a certain page.
When used outside of the project, no spider-specific behavior is applied and only the default Scrapy downloader settings are used.
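To make the in-project behaviour concrete: assuming the project contains a spider like the sketch below (names are hypothetical), fetching a URL under example.com would use that spider's user agent instead of the default one. custom_settings is one way to express such an override:

import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    # Per-spider settings override the project settings while this spider is used,
    # so `scrapy fetch` applies this user agent for URLs associated with the spider.
    custom_settings = {
        "USER_AGENT": "example-bot/1.0 (+https://example.com/bot)",
    }

    def parse(self, response):
        pass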
Supported Options:
--spider=SPIDER: bypass spider autodetection and force use of a specific spider
--headers: print the response's HTTP headers instead of the response's body
--no-redirect: do not follow HTTP 3xx redirects (default is to follow them)
Usage examples:
$ scrapy fetch --nolog http://www.example.com/some/page.html
[ ... html content here ... ]

$ scrapy fetch --nolog --headers http://www.example.com/
{'Accept-Ranges': ['bytes'],
 'Age': ['1263'],
 'Connection': ['close'],
 'Content-Length': ['596'],
 'Content-Type': ['text/html; charset=UTF-8'],
 'Date': ['Wed, 18 Aug 2010 23:59:46 GMT'],
 'Etag': ['"573c1-254-48c9c87349680"'],
 'Last-Modified': ['Fri, 30 Jul 2010 15:30:18 GMT'],
 'Server': ['Apache/2.2.3 (CentOS)']}
view¶
Syntax: scrapy view <url>
Requires project: no

Opens the given URL in a browser, as your Scrapy spider would "see" it. Spiders sometimes see pages differently from regular users, so this can be used to check what the spider "sees" and confirm it is what you expect.
Supported Options:
--spider=SPIDER: bypass spider autodetection and force use of a specific spider
--no-redirect: do not follow HTTP 3xx redirects (default is to follow them)
Usage example:
$ scrapy view http://www.example.com/some/page.html
[ ... browser starts ... ]
shell¶
Syntax: scrapy shell [url]
Requires project: no

Starts the Scrapy shell for the given URL (if given), or empty if no URL is given. It also supports UNIX-style local file paths, either relative with ./ or ../ prefixes or absolute file paths. See Scrapy shell for more info.
Supported Options:
--spider=SPIDER: bypass spider autodetection and force use of a specific spider
-c code: evaluate the code in the shell, print the result and exit
--no-redirect: do not follow HTTP 3xx redirects (default is to follow them); this only affects the URL you may pass as an argument on the command line; once you are inside the shell, fetch(url) will still follow HTTP redirects by default.
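Once inside the shell, the response for the fetched page is available as a regular Scrapy Response object; a typical (hypothetical) interaction looks like this:

# inside `scrapy shell http://www.example.com/some/page.html`
response.url                              # URL of the fetched page
response.css("title::text").get()         # extract the page title
fetch("http://www.example.com/other")     # fetch another page; follows redirects even with --no-redirect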
Usage example:
$ scrapy shell http://www.example.com/some/page.html
[ ... scrapy shell starts ... ]

$ scrapy shell --nolog http://www.example.com/ -c '(response.status, response.url)'
(200, 'http://www.example.com/')

# shell follows HTTP redirects by default
$ scrapy shell --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(200, 'http://example.com/')

# you can disable this with --no-redirect
# (only for the URL passed as a command line argument)
$ scrapy shell --no-redirect --nolog http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F -c '(response.status, response.url)'
(302, 'http://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F')
parse¶
Syntax: scrapy parse <url> [options]
Requires project: yes

Fetches the given URL and parses it with the spider that handles it, using the method passed with the --callback option, or parse if not given.
Supported Options:
--spider=SPIDER: bypass spider autodetection and force use of a specific spider
-a NAME=VALUE: set a spider argument (may be repeated)
--callback or -c: spider method to use as callback for parsing the response
--meta or -m: additional request meta that will be passed to the callback request. This must be a valid json string. Example: --meta='{"foo" : "bar"}'
--cbkwargs: additional keyword arguments that will be passed to the callback. This must be a valid json string. Example: --cbkwargs='{"foo" : "bar"}'
--pipelines: process items through pipelines
--rules or -r: use CrawlSpider rules to discover the callback (i.e. the spider method) to use for parsing the response
--noitems: don't show scraped items
--nolinks: don't show extracted links
--nocolour: avoid using pygments to colorize the output
--depth or -d: depth level for which the requests should be followed recursively (default: 1)
--verbose or -v: display information for each depth level
--output or -o: dump scraped items to a file (new in version 2.3)
Usage example:
$ scrapy parse http://www.example.com/ -c parse_item
[ ... scrapy log lines crawling example.com spider ... ]

>>> STATUS DEPTH LEVEL 1 <<<
# Scraped Items  ------------------------------------------------------------
[{'name': 'Example item',
  'category': 'Furniture',
  'length': '12 cm'}]

# Requests  -----------------------------------------------------------------
[]
settings¶
Syntax: scrapy settings [options]
Requires project: no

Get the value of a Scrapy setting.

If used inside a project, it will show the project setting value; otherwise it will show the default Scrapy value for that setting.
Usage example:
$ scrapy settings --get BOT_NAME
scrapybot
$ scrapy settings --get DOWNLOAD_DELAY
0
runspider¶
Syntax: scrapy runspider <spider_file.py>
Requires project: no
Run a standalone spider in a Python file, without having to create a project.
Usage example:
$ scrapy runspider myspider.py
[ ... spider starts crawling ... ]
version¶
Syntax: scrapy version [-v]
Requires project: no

Prints the Scrapy version. If used with -v it also prints Python, Twisted and platform info, which is useful for bug reports.
bench¶
Syntax: scrapy bench
Requires project: no

Run a quick benchmark test. See Benchmarking.
Custom project commands¶
You can also add your own project commands by using the COMMANDS_MODULE setting. See the Scrapy commands in scrapy/commands for examples of how to implement your commands.
COMMANDS_MODULE¶
Default: '' (empty string)
A module to use to find custom Scrapy commands. This is used to add custom commands to the Scrapy project.
Example:
COMMANDS_MODULE = "mybot.commands"
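As a rough sketch of what such a module can contain (names here are hypothetical; the built-in commands in scrapy/commands are the reference implementation), each command lives in its own module under the package named by COMMANDS_MODULE and subclasses ScrapyCommand, with the module name becoming the command name:

# mybot/commands/botname.py  ->  invoked as `scrapy botname`
from scrapy.commands import ScrapyCommand


class BotNameCommand(ScrapyCommand):
    requires_project = True

    def short_desc(self):
        return "Print the project's bot name (example command)"

    def run(self, args, opts):
        # self.settings holds the merged project settings.
        print(self.settings.get("BOT_NAME"))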
Register commands via setup.py entry points¶
You can also add Scrapy commands from an external library by adding a scrapy.commands section in the entry points of the library setup.py file.

The following example adds a my_command command:

from setuptools import setup, find_packages

setup(
    name="scrapy-mymodule",
    entry_points={
        "scrapy.commands": [
            "my_command=my_scrapy_module.commands:MyCommand",
        ],
    },
)