
Thread Safe or Non Thread Safe: Which Should You Choose?

First, Thread Safe means the program performs thread-safety checks at runtime, to keep the CGI execution model (which spawns a new thread for every incoming request) from exhausting system resources. Non Thread Safe means the program performs no such checks at runtime.

Next, consider PHP's two service modes: ISAPI and FastCGI. In ISAPI mode, PHP is loaded as a DLL; it is invoked when a user request arrives and does not unload after finishing a request, so thread-safety checks are required, which keeps execution efficient. FastCGI, by contrast, runs in a single thread, so no thread-safety checks are needed, and dropping them actually improves performance.

In short: when PHP runs in ISAPI mode, choose the Thread Safe build; when it runs in FastCGI mode, choose the Non Thread Safe build. You can check whether your current PHP build is thread safe by looking at the Thread Safety entry on the phpinfo() page.

Note: ISAPI vs. FastCGI is not something you configure inside PHP; it depends on which mode the web server (Apache, IIS, Nginx) uses to work with PHP.
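If you'd rather check from a terminal than load a phpinfo() page, the same report is available from the CLI. A minimal sketch in Python, assuming the php binary is on your PATH (php -i prints the same report that phpinfo() renders):

import subprocess

# Ask the local PHP build for its info report and pull out the Thread Safety line.
# Assumes `php` is on PATH; `php -i` is the CLI equivalent of phpinfo().
info = subprocess.run(["php", "-i"], capture_output=True, text=True).stdout
for line in info.splitlines():
    if "Thread Safety" in line:
        print(line.strip())  # e.g. "Thread Safety => disabled" on an NTS build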

How to Download an Entire Website for Offline Reading

Here's how you can download entire websites for offline reading so you have access even when you don't have Wi-Fi or 4G.

Although Wi-Fi is available everywhere these days, you may find yourself without it from time to time. And when you do, there may be certain websites you wish you could save and access while offline, perhaps for research, entertainment, or posterity.


It's easy enough to save individual web pages for offline reading, but what if you want to download an entire website? Well, it's easier than you think! Here are four nifty tools you can use to download any website for offline reading, zero effort required.

1. WebCopy


Available for Windows only.

WebCopy by Cyotek takes a website URL and scans it for links, pages, and media. As it finds pages, it recursively looks for more links, pages, and media until the whole website is discovered. Then you can use the configuration options to decide which parts to download offline.


The interesting thing about WebCopy is you can set up multiple "projects" that each have their own settings and configurations. This makes it easy to re-download many different sites whenever you want, each one in the same exact way every time.

One project can copy many websites, so use them with an organized plan (e.g. a "Tech" project for copying tech sites).

How to Download an Entire Website With WebCopy

  1. Install and launch the app.
  2. Navigate to File > New to create a new project.
  3. Type the URL into the Website field.
  4. Change the Save folder field to where you want the site saved.
  5. Play around with Project > Rules… (learn more about WebCopy Rules).
  6. Navigate to File > Save As… to save the project.
  7. Click Copy Website in the toolbar to start the process.

Once the copying is done, you can use the Results tab to see the status of each individual page and/or media file. The Errors tab shows any problems that may have occurred and the Skipped tab shows files that weren't downloaded.

But most important is the Sitemap, which shows the full directory structure of the website as discovered by WebCopy.

To view the website offline, open File Explorer and navigate to the save folder you designated. Open the index.html (or sometimes index.htm) in your browser of choice to start browsing.

2. HTTrack

Grab a webpage for offline reading with WinHTTrack

Available for Windows, Linux, and Android.

HTTrack is better known than WebCopy, and arguably better, because it's open source and available on platforms other than Windows. The interface is a bit clunky and leaves much to be desired, but it works well, so don't let that turn you away.

Like WebCopy, it uses a project-based approach that lets you copy multiple websites and keep them all organized. You can pause and resume downloads, and you can update copied websites by re-downloading old and new files.

How to Download a Website With HTTrack

  1. Install and launch the app.
  2. Click Next to begin creating a new project.
3. Give the project a name, category, and base path, then click Next.
  4. Select Download web site(s) for Action, then type each website's URL in the Web Addresses box, one URL per line. You can also store URLs in a TXT file and import it, which is convenient when you want to re-download the same sites later. Click Next.
  5. Adjust parameters if you want, then click Finish.

Once everything is downloaded, you can browse the site like normal by going to where the files were downloaded and opening the index.html or index.htm in a browser.
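HTTrack also ships a command-line binary, so you can script the steps above instead of clicking through the wizard. A minimal sketch in Python, assuming httrack is installed and on your PATH; the URL and output folder are placeholders, and -O sets the output directory (check HTTrack's manual for the full flag list):

import subprocess

# Mirror a site with HTTrack's CLI instead of the GUI wizard.
# Assumes the `httrack` binary is installed; URL and folder are placeholders.
subprocess.run([
    "httrack", "https://example.com/",  # site to copy
    "-O", "./mirrors/example",          # output directory for the mirror
])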

3. SiteSucker


Available for Mac and iOS.

If you're on a Mac, your best option is SiteSucker. This simple tool rips entire websites and maintains the same overall structure, and includes all relevant media files too (e.g. images, PDFs, style sheets).

It has a clean interface that could not be easier to use: you literally paste in the website URL and press Enter.

One nifty feature is the ability to save the download to a file, then use that file to download the same exact files and structure again in the future (or on another machine). This feature is also what allows SiteSucker to pause and resume downloads.

SiteSucker costs $5 and does not come with a free version or a free trial, which is its biggest downside. The latest version requires macOS 10.13 High Sierra or later. Older versions of SiteSucker are available for older Mac systems, but some features may be missing.


4. Wget

Available for Windows, Mac, and Linux.

Wget is a command-line utility that can retrieve all kinds of files over the HTTP and FTP protocols. Since websites are served through HTTP and most web media files are accessible through HTTP or FTP, this makes Wget an excellent tool for ripping websites.

While Wget is typically used to download single files, it can also recursively download all pages and files that are discovered through an initial page (-r follows links recursively, while -p also grabs the images, style sheets, and other resources each page needs):

wget -r -p https://www.makeuseof.com

However, some sites may detect and prevent what you're trying to do because ripping a website can cost them a lot of bandwidth. To get around this, you can disguise yourself as a web browser with a user agent string:

wget -r -p -U Mozilla https://www.makeuseof.com

If you want to be polite, you should also limit your download speed (so you don't hog the web server's bandwidth) and pause between each download (so you don't overwhelm the web server with too many requests):

wget -r -p -U Mozilla --wait=10 --limit-rate=35K https://www.makeuseof.com

Wget comes bundled with most Unix-based systems. On Mac, you can install Wget using a single Homebrew command: brew install wget (how to set up Homebrew on Mac). On Windows, you'll need to use this ported version instead.
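To make the recursion concrete, here is a rough Python sketch of what wget -r does: fetch a page, collect its same-site links, and repeat, pausing between requests. It only gathers URLs rather than saving files; the start URL, page cap, and wait time are illustrative, and it assumes the third-party requests and beautifulsoup4 packages are installed:

import time
from urllib.parse import urljoin, urlparse

import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

def crawl(start, max_pages=50, wait=10):
    # Breadth-first crawl restricted to the start URL's host, like wget -r.
    host = urlparse(start).netloc
    seen, queue = set(), [start]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        resp = requests.get(url, headers={"User-Agent": "Mozilla"}, timeout=10)
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue  # skip media; wget -p would save these as page requisites
        for a in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"])
            if urlparse(link).netloc == host:  # stay on the same site
                queue.append(link)
        time.sleep(wait)  # be polite, like --wait=10
    return seen

print(crawl("https://example.com/"))  # placeholder URL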

Which Websites Do You Want to Download?

Now that you know how to download an entire website, you should never be caught without something to read, even when you have no internet access.

But remember: the bigger the site, the bigger the download. We don't recommend downloading huge sites like MakeUseOf because you'll need thousands of MBs to store all of the media files we use.

The best sites to download are those with lots of text and not many images, and sites that don't regularly add new pages or change existing ones. Static information sites, online ebook sites, and sites you want to archive in case they go down are ideal.

If you're interested in more options for offline reading, take a look at how you can set up Google Chrome for reading books offline. And for other ways to read long articles instead of downloading them, check out our tips and tricks.

It's Time for an RSS Revival!

RSS faded with the shutdown of Google Reader, but now is the time for an RSS revival.

Author | Daniel Miessler

Translator | 明明如月; Editor | 郭芮

Header image | Downloaded by CSDN from 东方IC

The translation follows:

Many people who were online around 2005 remember RSS. Short for Really Simple Syndication, it lets content creators publish to the whole world in an easy-to-consume format.

The idea (strange as it may seem) was that millions of people around the world could create and publish ideas, thoughts, and content, and people who liked that content would then collect it in a reader: the so-called RSS reader.

Google Reader was one of many RSS readers, and for a while it was hugely popular. People took pleasure in curating all kinds of sources through their readers, and we wrote plenty (https://danielmiessler.com/blog/how-to-effectively-manage-and-process-your-rss-feeds-gtd/) about how to manage RSS feeds effectively.

RSS was the bridge between creators and readers. Adding someone's feed (the endpoint a reader polls for updates from that source) to your RSS reader meant, "I want to subscribe to your interpretation of reality."

Managing the sources in your reader meant managing your own worldview, one assembled from hundreds or thousands of independent minds.

"We don't know who struck first, us or them. But we know that it was us that scorched the sky." (Morpheus)

Many things ultimately hurt RSS, the shutdown of Google Reader among them. The rise of aggregator sites like Slashdot, Digg, and Reddit took over the work of gathering content and, to some degree, displaced RSS.

With these aggregators you no longer had to manage your own sources; you just logged into one site and saw the "best" content.

The image above is a photo I saw on Digg in 2005.

Reddit, like the other aggregators, is no doubt excellent. But I think their existence drained a lot of the fun. First, they broke the direct link between reader and creator: Digg serves you content instead of Kristen, whom you've followed for years, delivering it to you directly. Second, they took away the urge to curate your own content.

The less effort it takes to get something, the less it's worth to you.

Perhaps the biggest problem now is the web's advertising model, which runs counter to the RSS philosophy. With RSS you get the content itself, and readers can choose to display it in different ways. Advertisers don't like that; they want you on the original site so you see the ads they've designed.

I'm sure social media sites, like the aggregators, have also had a profound effect: they keep shrinking the effort it takes to get information. Like the scene in WALL-E, we've become the obese people in hover chairs, drifting from one stimulus to the next.

All of these factors eroded the model of getting content straight from the source.

To me, using RSS is like cooking your own meals. You pick the ingredients, use particular tools, and produce a result. It's like curating your own bookshelf, except that a bookshelf usually holds only books you like, whereas I think an RSS reader should hold a wide variety of content.

As with any hobby, the effort adds meaning to the output.

Maintaining an RSS reader forces people to think about the inputs they want, and in the process it shapes their values. Have you built a subscription list that suits you? Does it include people you respect but disagree with? Does it include people you can barely stand?

What you choose to read reflects not only how you see the world, but also how well you understand other people's points of view.

So, to sum up:

When we stop curating our own sources, we lose something. You don't have to give up the advantages of aggregator sites; just treat them as one resource among many. RSS has no messy fees, very few ads, no pop-ups, and so on. Curating your own sources is deeply worthwhile. To catch up on the news, you open one place instead of n places. Long live RSS!

Original article: https://danielmiessler.com/blog/its-time-to-get-back-into-rss/

Translator: 明明如月, a senior Java engineer at a well-known Internet company and a CSDN blog expert.

This article was translated by CSDN. Please credit the source when republishing.

How to Find an RSS Feed on a Website

Most RSS readers recommend RSS feeds or let you search for them. But sometimes you need to find one manually, if the site you want to subscribe to doesn't show up as a choice in your favorite RSS reader app.


Here are several ways to help you find a website's RSS feed so that you can stay updated on all the newest content.


Look for the RSS Icon

The easiest way to find an RSS feed is to look for the RSS icon somewhere on the website. If a site has one, it won't be shy about showing it, because it wants you to subscribe.


You can usually find the RSS feed icon at the top or bottom of the site. It's often near a search bar, email newsletter signup form, or social media icons.


As you can see in the above screenshot, not all RSS links are orange like the standard RSS icon. They also don't necessarily need to contain this symbol. You might find the RSS feed from a link that reads, "Subscribe for updates," or a totally different symbol or message.

Depending on the website, there might be several different RSS feeds you can subscribe to. To find those links, you might need to do a search or locate the specific area of the site you want to be updated on. If there's an RSS feed for that particular type of content, the icon will appear along with the results.


Torrent sites are a prime example of this, since most of them have several categories of information. The Pirate Bay, for instance, has a massive list of RSS feeds.

RSS feed links on The Pirate Bay

Edit the URL

Lots of websites serve their RSS feed at a page called feed or rss. To try this, go to the website's home page (erase everything after the domain name) and add /feed or /rss to the end of the URL.


Here's an example:

https://www.lifehack.org/feed
RSS feed URL in Internet Explorer

Depending on the website you're on and the browser you're using, what you see next might be a normal-looking web page with a Subscribe button or an XML-formatted page with a bunch of text and symbols.
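This trial-and-error is easy to automate if you'd rather not guess by hand. A minimal Python sketch; the path list is a convention-based guess rather than any standard, requests is a third-party package, and the example domain is the one from above:

import requests  # pip install requests

COMMON_PATHS = ["/feed", "/rss", "/feed.xml", "/rss.xml", "/atom.xml"]

def probe_feeds(site):
    # Try each conventional feed path and keep the ones that answer with XML.
    found = []
    for path in COMMON_PATHS:
        url = site.rstrip("/") + path
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        if resp.ok and ("xml" in resp.headers.get("Content-Type", "")
                        or resp.text.lstrip().startswith("<?xml")):
            found.append(url)
    return found

print(probe_feeds("https://www.lifehack.org"))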

View the Page Source

Another way you might find the RSS feed is to look "behind" the page. You can do this by viewing its source, which is the raw data your web browser translates into a viewable page.


Most web browsers let you quickly open the page source with the Ctrl+U or Command+U keyboard shortcut. Once you see the source code, search through it (with Ctrl+F or Command+F) for RSS. Feeds are usually declared in the page's head as <link rel="alternate" type="application/rss+xml"> tags, so the direct link to the feed is often right on that line.

RSS feed link from page source in Firefox
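Searching the source by hand works, but those same auto-discovery tags can be pulled out programmatically. A minimal sketch, assuming the third-party requests and beautifulsoup4 packages and reusing the example site from above:

import requests                  # pip install requests
from bs4 import BeautifulSoup    # pip install beautifulsoup4

def feeds_from_source(page_url):
    # Collect feed URLs declared in <link rel="alternate"> auto-discovery tags.
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    feeds = []
    for link in soup.find_all("link", rel="alternate"):
        if "rss" in link.get("type", "") or "atom" in link.get("type", ""):
            feeds.append(link.get("href"))
    return feeds

print(feeds_from_source("https://www.lifehack.org"))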

Use an RSS Feed Finder

There are special tools you can install in your web browser to locate a site's RSS feed(s). These add-ons are super easy to install and usually work really well.


If you use Chrome, you might try Get RSS Feed URL or RSS Subscription Extension (by Google). Firefox users have similar options, such as Awesome RSS and Feedbro.

Get RSS Feed URL extension for Chrome

Still Can't Find the Site's RSS Feed?

Some websites simply don't offer RSS feeds. But that doesn't mean you're out of luck. There are tools that can generate RSS feeds from websites that don't have them, although they don't always work very well.


Some examples of RSS generators that let you make a feed from nearly any website include FetchRSS, Feed Creator, PolitePol, Feed43, and Feedity.


What to Do After Finding the RSS Feed

After you find the RSS feed you want to subscribe to, you need a program that can read the feed's data and update you when the feed changes.


First, copy the RSS feed URL by right-clicking it and choosing the copy option. With the address copied, you can paste it into whatever tool you want to use to deliver the news to you. There are online RSS readers, feed readers for Windows, and Mac-supported RSS readers available, plus RSS aggregator tools to join multiple feeds together.
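For example, in Python, the third-party feedparser package handles this in a few lines (the feed URL below is the earlier example):

import feedparser  # pip install feedparser

# Fetch and parse a feed, then list its newest entries.
feed = feedparser.parse("https://www.lifehack.org/feed")
print(feed.feed.get("title", "(untitled)"))
for entry in feed.entries[:5]:
    print(entry.title, "->", entry.link)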
