All Topics

Hi, I have a search that uses the chart command to split by 2 fields, such that the results are shown below. The data is split by Name and Month. I would like to add a row with the average of all Names for each Month, and a column with the average of all Months for each Name. I have tried using appendpipe and appendcols for each case, but couldn't quite figure out the syntax with the chart command. PS: each row is already an appended subsearch.
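A hedged sketch of one approach, assuming the chart output has Name as the row field and one numeric column per Month (field and value names here are illustrative, not from the original search): appendpipe can append an average row, and foreach can accumulate a per-Name average column.

```
... | chart avg(value) over Name by Month
| appendpipe [ stats avg(*) as * | eval Name="Average (all Names)" ]
| eval total=0, n=0
| foreach * [ eval total=total + if("<<FIELD>>"!="Name" AND isnum('<<FIELD>>'), '<<FIELD>>', 0),
              n=n + if("<<FIELD>>"!="Name" AND isnum('<<FIELD>>'), 1, 0) ]
| eval "Average (all Months)" = round(total/n, 2)
| fields - total, n
```

The appendpipe subsearch runs over the charted rows, so the average row is computed after the split; the foreach runs over every row, including the appended one.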
I am using the Splunk Observability REST API, specifically the "/apm/trace" endpoint. I have the following questions about the throttling limits that trigger HTTP code 429:
- What is the limit?
- Is this configured anywhere within Splunk Observability?
- What type of throttling is it (e.g. hard, soft, or elastic/dynamic)?
- Does it use a fixed window or a rolling window, with or without counters?
Thanks!
Hi, regarding spans: may I ask if the Span ID should always be unique, meaning no two different spans have the same Span ID? Thanks
Hello, Any suggestions on onboarding Cradlepoint Router logs to Splunk? Please advise.   Thanks in advance.
Hi all, I am trying to find the difference between 2 searches.

Search 1: index="xxx_prd" "/XX900/LT_TEST"
This returns about 20 records.

Search 2: index="xxx_prd" "http://xxx.yyy.com/XX900/LT_TEST"
This returns 15 records.

I want to get the 5 results which are different between search 1 and search 2. Please advise.

Thanks, Yatan
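Since search 2's string contains search 1's path, the 15 events are presumably a subset of the 20. If so, a minimal hedged sketch to get just the 5 events matched by search 1 but not search 2:

```
index="xxx_prd" "/XX900/LT_TEST" NOT "http://xxx.yyy.com/XX900/LT_TEST"
```

This only works if the two result sets really do nest; for two arbitrary result sets, `| set diff [search ...] [search ...]` is the general-purpose alternative.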
We are receiving errors from the _internal index for JSON logs:

1. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "s"
2. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "a"

Sample log:

{
   level: debug
   message: Creating a new instance of inquiry call
   timestamp: 2022-08-25T20:30:45.678Z
}

My props.conf:

TIME_PREFIX=timestamp" : "
TIME_FORMAT= %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=40
TZ=UTC

How do I resolve this issue?
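For reference, a minimal props.conf sketch for single-line JSON events like the sample (the stanza name and exact settings below are assumptions, not taken from the original post; note the TIME_PREFIX must match the raw quoted JSON on disk, e.g. "timestamp":", not the pretty-printed form the search UI displays):

```
[my_json_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
KV_MODE = json
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC
```

JsonLineBreaker "unexpected character" errors typically mean the raw stream is not valid one-event-per-line JSON (e.g. multi-line pretty-printed JSON, or a non-JSON prefix on each line), so checking the raw file against the line-breaking settings is usually the first step.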
Hello everyone, I have been trying to understand how this alert works because, from my point of view, it doesn't make sense. This message NEVER disappears from our Splunk instances, and I have been trying to catch the real root cause, but I'm not clear on how this works. I have this message:

The percentage of small buckets (75%) created over the last hour is high and exceeded the red thresholds (50%) for index=foo, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=11, small buckets=8

So I checked whether the logs have a time parsing issue, and there are no issues with the logs indexed by the foo index. Then I checked with this search:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
      | rename title as index
      | eval maxDataSize = case (maxDataSize == "auto", 750, maxDataSize == "auto_high_volume", 10000, true(), maxDataSize)
      | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket = if (bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets count as num_total_buckets by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if (percentSmallBuckets > 30, "Yes", "No")
| search isViolation = Yes
| stats count

I ran that search for the last 2 days and the result is ZERO, but the red flag is still there... so I am not understanding what is going on.
Here is the log that indicates foo is rolling from hot to warm:

08-30-2022 02:12:27.121 -0400 INFO HotBucketRoller [1405281 indexerPipe] - finished moving hot to warm bid=foo~19~AAD3329E-C8D9-4607-90FB-167760B4EB6F idx=foo from=hot_v1_19 to=db_1661054400_1628568000_19_AAD3329E-C8D9-4607-90FB-167760B4EB6F size=797286400 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=797315072 (760MB)

So as I can see, the reason (caller=size_exceeded) is logical, due to the size. For information, this index receives data just once a day, at midnight. If you have any input I would really appreciate it.

Version 8.2.2
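To see which roll reasons dominate per index (and whether the small buckets come from size_exceeded rolls or from something like idle rolls or restarts), a hedged sketch against _internal:

```
index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| rex "caller=(?<caller>\S+)"
| eval sizeMB = round(size/1024/1024, 2)
| stats count avg(sizeMB) as avg_sizeMB by idx caller
| sort - count
```

If most rolls for foo show caller=size_exceeded at ~750MB, those buckets are not small; the alert's small buckets would then come from other callers (or other indexes lumped into "possibly more indexes").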
Hello, I am getting the below error while trying to wrap all results in pipes ("|").

Error: Failed to parse templatized search for field 'ResponseTime(ms)'

My search:

| table PeriodDate VendorName ContractName OccMetricCode Pagekey TransactionType TransactionDatetime ResponseTime(ms) Comment
| foreach * [ eval <<FIELD>>="|".<<FIELD>>."|" ]

I am getting pipe-separated results for every field except ResponseTime(ms):

PeriodDate    ResponseTime(ms)    Comment
|2022/08/30|  0                   ||

Thanks in advance
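The parentheses in ResponseTime(ms) most likely break the <<FIELD>> template when it is substituted into eval. A hedged workaround sketch: rename the field to a template-safe name first, quote the right-hand substitution, and rename it back.

```
| rename "ResponseTime(ms)" as ResponseTime_ms
| table PeriodDate VendorName ContractName OccMetricCode Pagekey TransactionType TransactionDatetime ResponseTime_ms Comment
| foreach * [ eval <<FIELD>> = "|" . '<<FIELD>>' . "|" ]
| rename ResponseTime_ms as "ResponseTime(ms)"
```

Single-quoting '<<FIELD>>' on the right-hand side of eval tells SPL to treat it as a field reference rather than an expression, which also guards against other unusual field names.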
I have a standalone instance with existing data on it. I have created a new indexer cluster that does not include this standalone machine. All instances are running the same OS and Splunk version. Can I add the existing data to the cluster by adding the standalone instance to the cluster as a peer? What will the behavior be in such a case?  I'm aware of the bucket copying method, but I'm hoping there's a more hands-off method to accomplish this goal. 
Hello, I am looking at https://docs.splunk.com/Documentation/Splunk/9.0.0/Capacity/Parallelization and was wondering which systems to make the changes on. For instance, for batch parallelization: should the limits be changed on the search heads, the indexers, or both? The same question applies to data models, report acceleration, and indexer parallelization. For what it's worth, I am running Splunk Enterprise 9 on a C1/C11 deployment. -jason
I have the following 2 logs.

DRT.log consists of the following log lines:

{"date_time":"20220823-13:11:11.622475033","severity":"INFO","dc":"DRT"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","version":"1.1.1"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","state":"running"}

And CME.log consists of the following log lines:

{"date_time":"20220823-13:11:11.622475033","severity":"INFO","dc":"CME"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","version":"2.2.2"}
{"date_time":"20220823-13:11:11.622475033","severity":"INFO","state":"down"}

The output I want to display is a table that looks like the following:

DataCenter  Version  State
DRT         1.1.1    running
CME         2.2.2    down

I have noticed that if I specify the explicit source file, then my search query works for that individual source. As an example:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/DRT.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This search returns:

DataCenter  Version  State
DRT         1.1.1    running

And likewise, if I replace the source with the other log file:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/CME.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This search yields the following:

DataCenter  Version  State
CME         2.2.2    down

However, if I run the search with a wildcard for the source, I only get partial results:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This yields the following (with missing data from DRT):

DataCenter  Version  State
CME         2.2.2    down
DRT

Or, grouping by DataCenter, I don't get the state at all:
index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats latest(version) as Version latest(state) as State by dc

This yields:

DataCenter  Version  State
CME         2.2.2
DRT         1.1.1

So the question is: how do I combine them into one search? I think the brunt of the issue is tying the dc, state, and version fields to the same source, but I'm not sure how to do that. Any help is much appreciated!
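Since dc appears only in the first line of each file, one hedged sketch is to group by source first (which every event carries), so dc, version, and state from the same file land in the same row, and then drop the source column:

```
index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log" severity="INFO"
| spath
| stats latest(dc) as DataCenter latest(version) as Version latest(state) as State by source
| fields - source
```

Splitting by source avoids the problem with splitting by dc: events that lack dc are not dropped from the group, so latest(state) still sees them.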
Hi there, I have a requirement where I have an index with two different sourcetypes:

index=a sourcetype=a1
index=a sourcetype=a2

There is a column in common between these two sourcetypes (e.g. corrlId). I want to display those records which are in sourcetype a1 but not in a2. Could someone tell me how to achieve this?

The rough query I am working on is this:

index=a sourcetype=a1
| search "*" trackrequest
| eval EDT_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| rename a.corrlId as CorrlID, EDT_time as "TimeStamp1"
| join type=left correlId
    [ search index=a sourcetype=a2
      | search "*" trackrequest
      | eval EDT_time = strftime(_time, "%Y-%m-%d %H:%M:%S")
      | rename a.corrlId as CorrlID, EDT_time as "TimeStamp2" ]
| table "TimeStamp1", CorrlID, "TimeStamp2"

With this query, a single record repeats n times in the output without actually giving me the desired result, which is all the distinct missing values.
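A join-free sketch for "in a1 but not in a2" (hedged: it assumes the shared field extracts as corrlId in both sourcetypes, which the rename to a.corrlId in the original query suggests may need adjusting):

```
index=a sourcetype IN (a1, a2) trackrequest
| stats values(sourcetype) as sourcetypes latest(_time) as last_time by corrlId
| where sourcetypes="a1"
| eval TimeStamp1 = strftime(last_time, "%Y-%m-%d %H:%M:%S")
| table corrlId TimeStamp1
```

Because values() returns every distinct sourcetype seen for a corrlId, a value that appears in both sourcetypes becomes a multivalue field and fails the sourcetypes="a1" test, leaving only the a1-only records.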
I just upgraded a dev instance from 7.3.4 to 9.0.1; splunkd starts, but the web UI stopped working. Found these in splunkd.log:

08-30-2022 12:43:16.300 -0400 ERROR UiPythonFallback [22665 WebuiStartup] - Couldn't start appserver process on port 8065: Appserver at http://127.0.0.1:8065 never started up. Set `appServerProcessLogStderr` to "true" under [settings] in web.conf. Restart, try the operation again, and review splunkd.log for any messages that contain "UiAppServer - From appserver"
08-30-2022 12:43:16.300 -0400 ERROR UiPythonFallback [22665 WebuiStartup] - Couldn't start any appserver processes, UI will probably not function correctly!
08-30-2022 12:43:16.300 -0400 ERROR UiHttpListener [22665 WebuiStartup] - No app server is running, stop initializing http server

However, after adding the "appServerProcessLogStderr = true" setting to web.conf, I only see this one line in splunkd.log:

08-30-2022 12:48:53.628 -0400 INFO UiAppServer [28199 appserver-stderr] - Starting stderr collecting thread

No message with "UiAppServer" after that. Any thoughts / help would be much appreciated!
Hello Splunk team, I have two doubts; please help me with the details.

1. We are using the Splunk Cloud platform for Enterprise Security. Is there any way to know the time span of the buckets, i.e. for how many days we have configured them? For example: Hot - 90 days, Warm - 90 days. How do we get this from the Splunk GUI? I have used "| dbinspect" in a search query, but I am unable to get for how many days we keep Hot, Warm, etc.

2. When using a search query we can see the time range "All Time"; what does it actually mean? Is it from when we configured Splunk, from when logs were first ingested, or only the data in the hot and warm buckets?

Thanks in advance for letting me know the details.
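One hedged sketch: overall retention per index is governed by frozenTimePeriodInSecs rather than by separate hot/warm day counts, and it can be read from the search bar via the REST endpoint (access to | rest may be restricted on Splunk Cloud depending on your role):

```
| rest /services/data/indexes
| eval retention_days = frozenTimePeriodInSecs / 86400
| table title retention_days currentDBSizeMB
```

"All Time" in the time range picker means every event still searchable in the index, i.e. everything not yet frozen (hot, warm, and cold buckets), regardless of when Splunk itself was configured.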
Hi folks, I'm very new at syslog server configuration, but I have a question. I have a UF (universal forwarder) and I want it to act as a syslog server as well. I want it to receive syslog messages on a different port (not 514), for example port 30001. Should that port be opened on the Splunk side or on my network side? I'd appreciate any comments or documents to further my understanding. Thanks.
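A minimal inputs.conf sketch for a UF listening for syslog on 30001/udp (hedged: both are needed — Splunk opens the listener via this config, and the host firewall plus any network ACLs between the devices and the UF must also allow the port):

```
[udp://30001]
sourcetype = syslog
connection_host = ip
```

TCP works the same way with a [tcp://30001] stanza. A dedicated syslog receiver (e.g. syslog-ng or rsyslog) writing to files monitored by the UF is often recommended over a direct UF listener, since syslog data is lost whenever the UF restarts.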
Hi! I have logs like this:

eventtype=000111 msg=malicious srcip=11.11.22.22
eventtype=123 msg=traffic srcip=11.11.22.22 hostname=MyMachine

Both lines are in the same index. I would like to get something like this:

eventtype=000111 msg=malicious srcip=11.11.22.22 hostname=MyMachine

I've tried using joins, but they only got results when the indexes are different, because the initial eventtype condition doesn't match the second event. This is the query, which doesn't work:

index=index_ logid=1122
| fields *
| join srcip
    [ search index=index_
      | table hostname ]
| table eventtype msg srcip hostname

Can you help me? Thanks!
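Instead of join, one hedged sketch is to pull both event types in a single search and let stats glue the fields together by srcip (eventtype values taken from the example above):

```
index=index_ (eventtype=000111 OR eventtype=123)
| stats values(msg) as msg values(hostname) as hostname by srcip
```

stats by srcip merges fields from different events sharing the same source IP, so hostname from the traffic event attaches to the malicious event's row without a subsearch.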
We currently have our Splunk Enterprise instance running on a stand-alone VM, but we are looking to add an additional VM for some sort of replication: a hot/cold standby option, or whatever the best practice may be. Has anyone had experience doing this, and what were your steps?
Hi everyone, I need to remove users that leave the company. I've already removed them from the company AD, but they remain in Splunk Cloud. Does anyone know how I can delete/remove them from Splunk Cloud? Thank you. Clecimar
Hello all, using version 1.7.6 on Splunk Enterprise 8.2.3.

Search error:

Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py'.

| tstats summariesonly=t count from datamodel=Web WHERE (*) sourcetype="websense:cg:kv" Web.mid IN (*) Web.id IN (*) Web.user IN ("**") Web.action IN ("*") Web.src IN ("**") Web.status IN ("*") Web.http_method IN ("*") Web.category IN ("*") Web.dest IN ("***") Web.http_user_agent IN ("**") by Web.http_user_agent
| rename Web.* as *
| stats sum(count) as "count" by http_user_agent
| lookup user_agents http_user_agent
| table count ua_family http_user_agent
| sort 0 -count

From the Job Inspector:

08-30-2022 14:30:38.150 ERROR ScriptRunner [53774 StatusEnforcerThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py http_user_agent ua_os_family ua_os_major ua_os_minor ua_os_patch ua_os_patch_minor ua_family ua_major ua_minor ua_patch ua_device': File "/opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py", line 54
08-30-2022 14:30:38.150 ERROR ScriptRunner [53774 StatusEnforcerThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py http_user_agent ua_os_family ua_os_major ua_os_minor ua_os_patch ua_os_patch_minor ua_family ua_major ua_minor ua_patch ua_device': results = user_agent_parser.Parse(http_user_agent)
08-30-2022 14:30:38.150 ERROR ScriptRunner [53774 StatusEnforcerThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py http_user_agent ua_os_family ua_os_major ua_os_minor ua_os_patch ua_os_patch_minor ua_family ua_major ua_minor ua_patch ua_device': ^
08-30-2022 14:30:38.150 ERROR ScriptRunner [53774 StatusEnforcerThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py http_user_agent ua_os_family ua_os_major ua_os_minor ua_os_patch ua_os_patch_minor ua_family ua_major ua_minor ua_patch ua_device': TabError: inconsistent use of tabs and spaces in indentation
08-30-2022 14:30:38.153 ERROR ExternalProvider [53774 StatusEnforcerThread] - Script execution failed for external search command '/opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py'.
08-30-2022 14:30:38.153 ERROR SearchStatusEnforcer [53774 StatusEnforcerThread] - StatusEnforcerThread failed with error: Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py'.
08-30-2022 14:30:38.153 INFO ReducePhaseExecutor [53774 StatusEnforcerThread] - ReducePhaseExecutor=1 action=CANCEL
08-30-2022 14:30:38.153 INFO DispatchExecutor [53774 StatusEnforcerThread] - User applied action=CANCEL while status=0
08-30-2022 14:30:38.153 ERROR SearchStatusEnforcer [53774 StatusEnforcerThread] - sid:_amFzb25faG90Y2hraXNzQGFvLnVzY291cnRzLmdvdg_amFzb25faG90Y2hraXNzQGFvLnVzY291cnRzLmdvdg_bmxzX1VJX2Rldg__search32_1661869827.397931_B7BA11EF-467A-4E74-B057-FC9CAC03F269 Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/etc/apps/TA-user-agents/bin/user_agents.py'.

Any suggestions on how to fix this? Thank you.
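The TabError in the stderr output means line 54 of user_agents.py mixes tabs and spaces in its indentation, which Python 3 rejects outright. The fix is to re-indent the file consistently (spaces only is conventional). A quick, hedged sketch for locating every offending line before editing:

```python
# Sketch: report lines whose leading whitespace mixes tabs and spaces,
# the condition that raises TabError under Python 3.
def mixed_indent_lines(text):
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        stripped = line.lstrip(" \t")
        indent = line[:len(line) - len(stripped)]
        # Flag indentation that contains both a space and a tab.
        if " " in indent and "\t" in indent:
            bad.append(lineno)
    return bad

# Example: line 3 below is indented with a tab followed by a space.
sample = "def parse(ua):\n    results = 1\n\t x = 2\n"
print(mixed_indent_lines(sample))
```

Running it over the contents of user_agents.py (e.g. `mixed_indent_lines(open(path).read())`) pinpoints each line to re-indent; `python3 -m tabnanny` on the file is a stdlib alternative.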
Hello all, I know this has been asked many different ways, but I can't seem to get the search correct. I am trying not to display data that is less than 10 days old.

I have to set up a whitelist via a lookup table; the idea is that we add IPs or URLs that show no threat so we stop seeing those alerts, but we want to recheck the data again in 10 days. This is my test search, but it still shows IPs and URLs that are in the lookup table:

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| search NOT [| inputlookup my_whitelist.csv | fields threat_match_value]
| where lastSeen>=relative_time(now(),"-10d") AND _time<=now()
| table _time threat_match_value

My lookup table fields are
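One hedged approach, assuming the lookup gains a column recording when each entry was whitelisted (added_time below is a hypothetical epoch-time field, not one of the existing columns): suppress a match only while its whitelist entry is less than 10 days old, so entries resurface automatically after 10 days.

```
| from datamodel:"Threat_Intelligence"."Threat_Activity"
| lookup my_whitelist.csv threat_match_value OUTPUT added_time
| where isnull(added_time) OR added_time < relative_time(now(), "-10d")
| table _time threat_match_value
```

isnull(added_time) keeps every event that is not whitelisted at all; the second clause re-admits whitelisted values once their entry has aged past 10 days. The NOT [| inputlookup ...] pattern in the original search cannot express this, because a plain subsearch exclusion has no notion of when the entry was added.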