All Topics



When upgrading apps/add-ons in a distributed environment, is there a recommended best practice, or is it similar to deploying the app initially, where I can just paste the newer version downloaded from Splunkbase over the existing app and then push the new bundle to the peers to fully update it? Also, is there any scenario where a rolling restart wouldn't be required? Thanks in advance.
I was watching module 3 in Training. When I type and enter tar xvzf splunk-8.0.3-a6754d8441bf-Linux-x86_64.tgz -C /opt, following the video, it shows an error (tar: splunk: Cannot mkdir: Permission denied). How can I fix this issue?
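For what it's worth, this error usually just means the current user cannot create directories under /opt. A common workaround in a single-user lab setup (the splunkuser account name below is hypothetical; substitute whatever account will actually run Splunk) is to extract as root and then hand ownership of the tree over:

```shell
# /opt is typically writable only by root, so extract with sudo
sudo tar xvzf splunk-8.0.3-a6754d8441bf-Linux-x86_64.tgz -C /opt

# Give the extracted tree to the (hypothetical) user that will run Splunk,
# so splunkd does not need to run as root
sudo chown -R splunkuser:splunkuser /opt/splunk
```
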
I have a search which captures data from all the machines on the network and calculates the OS health of each machine (host). I am displaying it like this:

OSHealth  DeviceCount  Percentage
5         288          35%
4         150          20%

I want to add a sparkline as another column to show the trend of the changing percentage. I have tried a lot based on the Splunk docs, but it always shows as a straight line. I would really appreciate some help. My search is attached below:

| stats count(host) AS DeviceCount by OSHealth
| eventstats sum(DeviceCount) AS SumDevice
| eval Percentage = round((DeviceCount/SumDevice)*100,1)
| stats sparkline avg(Percentage) as Trend by OSHealth DeviceCount
| table OSHealth DeviceCount Percentage
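A flat sparkline usually means the aggregation only ever sees a single time bucket per row, so there is nothing to trend. One way to sketch a fix (the 1h span, and the field names carried over from the search above, are assumptions) is to compute the percentage per time bucket first, then let sparkline aggregate across the buckets:

| bin _time span=1h
| stats count(host) AS DeviceCount by _time OSHealth
| eventstats sum(DeviceCount) AS SumDevice by _time
| eval Percentage = round((DeviceCount/SumDevice)*100,1)
| stats sparkline(avg(Percentage)) AS Trend latest(DeviceCount) AS DeviceCount latest(Percentage) AS Percentage by OSHealth

Because each OSHealth value now has one Percentage per hourly bucket, sparkline(avg(Percentage)) has a series of points to draw rather than one repeated value.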
Only for the stanza icann_top_level_domain_list, we are getting the error "threat list download failed after multiple retries" for url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt". Here is the sample log:

2020-04-16 23:30:15,664+0000 INFO pid=17793 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="2" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-16 23:31:45,919+0000 INFO pid=17793 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="1" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-16 23:33:16,208+0000 INFO pid=17793 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="0" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-16 23:34:16,256+0000 INFO pid=17793 tid=MainThread file=threatlist.py:download_csv:417 | stanza="icann_top_level_domain_list" retries_remaining="-1" status="threat list download failed after multiple retries" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-17 02:51:43,154+0000 INFO pid=17044 tid=MainThread file=threatlist.py:run:459 | status="continuing" msg="Processing stanza" name="threatlist://icann_top_level_domain_list"
2020-04-17 02:51:43,154+0000 INFO pid=17044 tid=MainThread file=threatlist.py:run:473 | status="retrieved_checkpoint_data" stanza="icann_top_level_domain_list" last_run="1587079694.838963"
2020-04-17 02:51:43,154+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:364 | status="CSV download starting" stanza="icann_top_level_domain_list"
2020-04-17 02:52:13,381+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="3" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-17 02:53:43,697+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="2" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-17 02:55:13,916+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="1" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-17 02:56:44,174+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="0" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-04-17 02:57:44,234+0000 INFO pid=17044 tid=MainThread file=threatlist.py:download_csv:417 | stanza="icann_top_level_domain_list" retries_remaining="-1" status="threat list download failed after multiple retries" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-05-02 02:51:43,206+0000 INFO pid=23520 tid=MainThread file=threatlist.py:run:459 | status="continuing" msg="Processing stanza" name="threatlist://icann_top_level_domain_list"
2020-05-02 02:51:43,207+0000 INFO pid=23520 tid=MainThread file=threatlist.py:run:473 | status="retrieved_checkpoint_data" stanza="icann_top_level_domain_list" last_run="1587091903.1543882"
2020-05-02 02:51:43,207+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:364 | status="CSV download starting" stanza="icann_top_level_domain_list"
2020-05-02 02:52:13,858+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="3" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-05-02 02:53:44,127+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="2" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-05-02 02:55:14,407+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="1" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-05-02 02:56:44,681+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:390 | stanza="icann_top_level_domain_list" retries_remaining="0" status="retrying download" retry_interval="60" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
2020-05-02 02:57:44,703+0000 INFO pid=23520 tid=MainThread file=threatlist.py:download_csv:417 | stanza="icann_top_level_domain_list" retries_remaining="-1" status="threat list download failed after multiple retries" url="https://data.iana.org/TLD/tlds-alpha-by-domain.txt"
Hi, I am using the SaaS controller for AppDynamics to test the REST call. I have to generate the token through the REST call https://tataconsultancyservicesltd789.saas.appdynamics.com/controller/api/oauth/access_token It was working until 29/04/2020. Now I am not able to get the token and get a 500 Internal Server Error.
Hello Team, I have a requirement to send a scheduled mail with a PDF that contains the results of multiple reports. I have tried with an alert and a report, but I learned that an alert or report can hold only one search query at a time. Can we add multiple search queries, separated into panels like Splunk dashboard panels, so that I can schedule it for mail communication with a PDF attachment? Please suggest.
I am trying to table all accounts that have a particular error. I am able to capture the error from the response along with a count, but I want to relate that error to the corresponding account in the request URI. I am not able to capture that; the only relation by which I can bind request and response is the TraceId. I need help to table both.
Hi everyone, I was reading through "endpoint security analyst with Splunk (online experience)", which you can find here: http://si_usecase_02.splunkoxygen.com/en-US/app/OLE_Security_Endpoint/sec_search_01?tour=gs_main_intro This is a four-exercise tutorial that shows you how to detect and prevent advanced malware. As I was moving along with the tutorial step by step, this statement caught my attention: "Any process activities with a command line command length that is more than four times the average and standard deviation command line command lengths for each host is an outlier." My question is: why? Is this a standard formula? An axiom? Where did this come from? Here is the query that was used in this tutorial:

sourcetype=xmlwineventlog:microsoft-windows-sysmon/operational EventCode=1
| eval cmdlen=len(CommandLine)
| eventstats avg(cmdlen) as avg, stdev(cmdlen) as stdev by host
| stats max(cmdlen) as maxlen, values(avg) as avgperhost, values(stdev) as stdevperhost by host, CommandLine
| eval threshold = 4 * ( stdevperhost + avgperhost )
| where maxlen > threshold

You can see it in the second eval command (the line before the last). Thanks in advance.
My column chart currently looks like this. To make it a more readable format, I have to increase the width and enable a horizontal scrollbar. I tried the CSS below, but it is still not working:

#manager_chart .dashboard-element chart {
    width: 110% !important;
    overflow: scroll;
    overflow-x: visible !important;
    overflow-y: scroll
}

Thanks in advance.
I apologize if this is already answered; I'm having trouble figuring out how to even search for it. I am trying to search through logs that have two fields: user and software version. I want to return any user who has not upgraded their software, i.e. a list of all users who have never matched software version x but have matched other versions. How would I search for that?
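One possible sketch (the index name your_index and the field names user and version are placeholders for whatever the logs actually use): gather each user's distinct versions, then keep only the users whose list never includes version x:

index=your_index
| stats values(version) AS versions by user
| where isnull(mvfind(versions, "^x$"))

mvfind returns the index of the first multivalue entry matching the regex, or null when nothing matches, so the where clause keeps exactly the users who never reported version x while still having reported some version.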
When using SSO with clustered search heads, users who lose SSO access leave behind knowledge objects and directories on the file system. I'm doing some work to clean these up. In order to query the Splunk API for the full set of information, it's necessary to re-create the user so that Splunk will see and return information about their private knowledge objects. While doing this, I noticed the following problem:

1. I can ask the API about the knowledge objects owned by a user via /servicesNS/-/-/admin/directory.
2. However, if I ask about saved searches from /servicesNS/USERNAME/search/saved/searches/SEARCH_NAME, I get a 404 for searches that are present in (1).
3. After 5-10 seconds, (2) begins to work.

I suspected that maybe the search head cluster needs to sync some configuration, so I hit /services/shcluster/status while (2) was failing repeatedly, to get some info about the search head cluster state. None of the search heads indicated they were out of sync, and the last conf replication time reset (indicating a configuration replication had happened), but the API was still returning 404s on saved searches for a few seconds. Is there any way to know when it's "safe" to request information pertaining to a user? Is the /directory endpoint potentially affected by this? Are there other endpoints that may be affected in the same way? One other thing I tested was querying /servicesNS/-/search/saved/searches/SEARCH_NAME, but it exhibited the same behavior. Not all users seem to behave this way, but the particular user in question had a couple of knowledge objects of type "props-extract". It seems likely that re-adding those to the system takes longer, and this added delay somehow affects their saved searches showing up.
Ever since I had to upgrade my phone to 13.4.1 for work compliance, Health Post consistently crashes. I've got the log and crash dump, but don't have enough Karma to attach or link them here.
Hello, I am new to Splunk and am trying to get the browsing history analysis app running on Splunk Enterprise (demo), which is capturing local logs as well as logs from a UF. I am stumped, as I do not know what data input to use, and there is not very much documentation that I have been able to find. Any help getting the data input set up would be greatly appreciated. Thanks.
I am trying to install Splunk 8.0.3 on macOS 10.15.2 and am seeing issues with the installation. I followed this guide -> https://docs.splunk.com/Documentation/Splunk/8.0.3/Installation/InstallonMacOS and tried with both the dmg and the tgz, and I get an error like "Apple cannot verify Splunk for malicious software". Can you please help?
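That message typically comes from macOS Gatekeeper, which quarantines files downloaded through a browser. One common workaround (assuming the installer is in the current directory; adjust the filename to the exact one you downloaded) is to clear the quarantine attribute before opening it:

```shell
# Remove the quarantine attribute that Gatekeeper adds to downloaded files,
# so macOS no longer blocks the unverified installer
xattr -d com.apple.quarantine splunk-8.0.3-*.dmg
```

Alternatively, right-clicking the installer in Finder and choosing Open usually offers an "Open anyway" option for a single file.
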
I am trying to make sure I know how to configure an environment to ingest weblogs that are correctly parsed, and I am running into trouble in that I am only getting one single event. I have used feedback provided to similar questions to build out my configurations. Note that the original intent of this exercise was to see what the different effect would be with two different props.conf files. My weblog source is this on both forwarders:

<photo id="123" title="Birthday" format="jpg">
  <owner id="1111">Jason</owner>
  <CreationDate>2009-11-06T02:22:37.063</CreationDate>
  <comments>
    <comment ownerid="112">Good pic!</comment>
    <comment ownerif="223">Happy birthday</comment>
  <comments>
</photo>
<photo id="123" title="Birthday" format="jpg">
  <owner id="1111">Jason</owner>
  <CreationDate>2009-11-06T02:22:37.063</CreationDate>
  <comments>
    <comment ownerid="112">Good pic!</comment>
    <comment ownerif="223">Happy birthday</comment>
  <comments>
</photo>
<photo id="123" title="Birthday" format="jpg">
  <owner id="1111">Jason</owner>
  <CreationDate>2009-11-06T02:22:37.063</CreationDate>
  <comments>
    <comment ownerid="112">Good pic!</comment>
    <comment ownerif="223">Happy birthday</comment>
  <comments>
</photo>

My inputs.conf on FW1 is this:

[monitor:///home/labuser/xmldata/]
index=web
sourcetype=xml
disabled=false

My inputs.conf on FW2 is this, so that I could figure out which props.conf works:

[monitor:///home/labuser/xmldata/]
index=web2
sourcetype=xml2
disabled=false

My props.conf on FW1 is this:

KV_MODE = xml
LINE_BREAKER = ()
MUST_BREAK_AFTER = \
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = \
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N

My props.conf on FW2 is this:

KV_MODE = xml
LINE_BREAKER = ([\r\n]+)()
MUST_BREAK_AFTER = \
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = \
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N

All the data in the web and web2 indexes looks identical in Splunk: both index=web and index=web2 produce identical results, in that I only get a single event back instead of multiple events. What am I doing wrong?
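For comparison, here is a minimal props.conf sketch that breaks sample data like the above into one event per <photo> element. This is an untested assumption based on the sample, not a verified config, and note that line breaking is applied by the first full Splunk instance the data passes through (an indexer or heavy forwarder), so placing it on a universal forwarder would have no effect:

[xml]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<photo\s)
TRUNCATE = 0
TIME_PREFIX = <CreationDate>
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
KV_MODE = xml

The LINE_BREAKER consumes the newlines in its capture group and uses a lookahead so each event starts at an opening <photo> tag, and TIME_PREFIX anchors the timestamp parser to the CreationDate element rather than a bare backslash.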
I don't know what is wrong
I want to compare two files (big: 200.000.00 and 150.000.000 lines). These are lists of domain names, and I want to produce the difference list. The first file is an export from Splunk. Example:

tmpzonefile:
twotwotwo.nl
two.nl
three.nl
four.nl
five.nl

tmpingestedzonefile:
twotwo.nl
three.nl
four.nl

Diff file must be:
twotwo.nl
five.nl

The following script yields too much. Any idea what goes wrong here? And it takes forever to process large files.

if debug == 1:
    print('DEBUG: Number of ingested domains returned: %s' % str(count))
    print('DEBUG: Missing domains: %s' % str(numdomains - count))

# Determine missing domains
tmpzonefile_f = open(tmpzonefile)
tmpingestedzonefile_f = open(tmpingestedzonefile)
difffile = open('/tmp/' + zone + '_zone_full.txt', 'wt')

old = [line.strip() for line in tmpzonefile_f]
# Build a set of the ingested domains: 'line not in new' against a list
# rescans the whole list for every domain (O(n*m)), which is why large
# files take forever. Set membership tests are O(1).
# If the diff is still too large, check whether the two exports differ in
# case or trailing dots and normalize (e.g. line.strip().lower()) first.
new = set(line.strip() for line in tmpingestedzonefile_f)

count = 0
for line in old:
    if line not in new:
        count += 1
        difffile.write(line + '\n')

print('DEBUG: Number of domains written to diff file: %s' % str(count))
tmpzonefile_f.close()
tmpingestedzonefile_f.close()
difffile.close()
Hello, I am trying to pull the min and max time for each user:

index="iptv_rdkb" [|inputlookup usersfile.csv]
| fields _time Source device.make model userId
| stats count by Source make model userId _time
| eventstats max(_time) AS max min(_time) AS min
| eval max=strftime(max, "%Y/%m/%d %T.%3Q")
| eval min=strftime(min, "%Y/%m/%d %T.%3Q")
| stats earliest(min) as min earliest(max) as max first(make) as make first(model) as model first(userId) as user by userId

Results:

Source    min         max         make   model  userId
b661834   2020-04-10  2020/04/10  TECHN  xyz    1
b654623   2020-04-10  2020/04/10  TECHN  xyz    2
b637895   2020-04-10  2020/04/10  TECHN  xyz    3

This search gives me the same time for each record. For example, if the minimum time is 2020-04-10 in any of the records, it gives that date/time in every record, instead of the min and max of that specific user. I need min and max for each specific user. Please help.
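For reference, eventstats without a by clause computes min and max across all events, which would explain identical times on every row. A sketch of a per-user aggregation (field names are assumptions carried over from the search above) moves the min/max into a single stats with by userId:

index="iptv_rdkb" [| inputlookup usersfile.csv]
| stats min(_time) AS min max(_time) AS max latest(make) AS make latest(model) AS model by userId
| eval min=strftime(min, "%Y/%m/%d %T.%3Q"), max=strftime(max, "%Y/%m/%d %T.%3Q")

Because min(_time) and max(_time) are computed inside the by userId grouping, each row carries that user's own earliest and latest event times.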
Hi, recently Apple rejected all apps that contain the old UIWebView APIs. Unfortunately there is still UIWebView in the latest release of the Splunk MINT iOS SDK, which caused my app to be rejected. I would like to ask whether you have a plan to remove UIWebView?