All Topics

I am trying to create a timechart overlay of blocked traffic compared to total traffic with the following search:

| tstats count AS "Total Traffic" from datamodel=Network_Traffic where (nodename = All_Traffic) OR (nodename = Blocked_Traffic) All_Traffic.src_zone=INTERNET-O groupby _time span=1d, All_Traffic.src_zone, All_Traffic.action, All_Traffic.Traffic_By_Action.Blocked_Traffic prestats=true
| `drop_dm_object_name("All_Traffic")`
| timechart span=1d count by action
| eval "Block Avg" = round('blocked'*100/('allowed'+'blocked'),2)

This search has two issues:
1. The timechart shows bars by action, and I'd like to see just the total count of network sessions.
2. The "Block Avg" is basically flatlined, since it sits at roughly 40% while my totals by action are roughly 1.5B.
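One shape that might address both (an untested sketch; it assumes the allowed/blocked split can come straight from All_Traffic.action, so the Blocked_Traffic child node isn't needed): compute the per-day counts by action, derive the total and the percentage, and then put "Block Avg" on a chart overlay axis so the ~40% line isn't flattened against the ~1.5B bars.

| tstats count from datamodel=Network_Traffic where nodename=All_Traffic All_Traffic.src_zone=INTERNET-O groupby _time span=1d, All_Traffic.action prestats=true
| `drop_dm_object_name("All_Traffic")`
| timechart span=1d count by action
| eval "Total Traffic" = 'allowed' + 'blocked'
| eval "Block Avg" = round('blocked'*100/('allowed'+'blocked'),2)
| fields _time, "Total Traffic", "Block Avg"

With charting.chart.overlayFields set to "Block Avg" and a second Y axis, the percentage gets its own scale instead of sharing the count axis.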
On Splunk Enterprise (on-prem) 9.0.1, after using Smart Forecasting (MLTK 5.3.1) and publishing the recently created model, the apply command returns the error "Error in 'apply' command: list assignment index out of range". But the same model created using SPL with the fit command works fine. Here is the traceback (each line in splunkd.log is prefixed with "11-07-2022 15:17:42 ERROR ChunkedExternProcessor [50291 ChunkedExternProcessorStderrLogger] - stderr:"; only the stderr payload is shown below):

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 158, in apply
    prediction_df = algo.apply(df, process_options)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py", line 712, in apply
    df = self.add_output_metadata(df)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py", line 353, in add_output_metadata
    metadata[i] = 'f'
IndexError: list assignment index out of range
WARNING Error while applying model "modelo4": list assignment index out of range
list assignment index out of range

(The same IndexError traceback is then repeated, followed by:)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/cexc/__init__.py", line 174, in run
    while self._handle_chunk():
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/cexc/__init__.py", line 236, in _handle_chunk
    ret = self.handler(metadata, body)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/apply.py", line 136, in handler
    self.controller.execute()
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/chunked_controller.py", line 220, in execute
    self.processor.process()
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 177, in process
    self.df = self.apply(self.df, self.algo, self.process_options)
  File "/opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/processors/ApplyProcessor.py", line 166, in apply
    raise RuntimeError(e)
RuntimeError: list assignment index out of range

Has anyone managed to get the publish option in MLTK working?
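For reference, the SPL route that works looks like this (a minimal sketch with a hypothetical model name and a generic input, not the exact search used above; holdback and forecast_k are standard StateSpaceForecast options):

index=_internal | timechart span=1h count
| fit StateSpaceForecast count holdback=0 forecast_k=24 into hypothetical_model

and then, in a separate search:

index=_internal | timechart span=1h count
| apply hypothetical_model

The failure only appears when applying the model that the Smart Forecasting assistant published, which is what points at the publish path rather than the algorithm itself.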
index=dev log-severity=INFO app name=abcd
| rex "tv counts for indicator S = (?<Count>\d+)"
| stats count by _time, Count
| table _time, Count

I have two patterns extracted separately:
1) tv counts for indicator S = (?<Count>\d+)
2) Dishtv counts for indicator S = (?<Count>\d+)

Both counts are combined because the log messages share the same wording ("tv counts for indicator S = ..."). The Spark DataFrame that generates messages 1 and 2 is different, and they have different output counts, but in graphs they overlap because the logger message wording is the same. How can I get separate counts for each of them? Please suggest.
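One way to split them is to capture the message prefix as its own field (a sketch; since "Dishtv" contains "tv", the alternation lists "Dishtv" first and anchors the bare "tv" on a word boundary so it cannot match inside "Dishtv"):

index=dev log-severity=INFO app name=abcd
| rex "(?<indicator>Dishtv|\btv) counts for indicator S = (?<Count>\d+)"
| stats count by _time, indicator

Grouping by the captured indicator field then keeps the two message sources as separate series in a chart.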
Hi all, I have an established query which is working fine. But when I try to add an inputlookup to the query, it's not working. I am using a federated search. My need is to configure a maintenance table as a CSV lookup and refer to it in the query. When I try to access the CSV file via inputlookup, I get an error. Can you please suggest whether there is a way to configure maintenance for a particular backend via a lookup table and refer to it in the query? I want to exclude the backend host for a particular date and time. Query below:

index="federated:XXX" ("HTTP response code" OR "url-open" OR "Host connection failed") NOT "HTTP response code 2**"
| rex field=_raw "https://(?<backend>.*)\:"
| rex field=_raw "gtid\(\w{1,24}\): (?<error>.*)"
| rex field=_raw "^<\d+>(?P<date>\d+\-\d+\-\d+\w+:\d+:\d+\.\d+)[^ \n]* (?P<host>\w+)\s+\[(?P<domain>[^\]]+)"
| eval thresholdValue = case(backend=="******" AND domain=="*****", 500, backend=="abcd.com" AND domain!="abcd-ALERTS", 350, backend=="ertyu.com" AND domain=="ertyu", 1000, backend!="qwerty.com", 100)
| stats count by domain,backend,error,source,thresholdValue
| sort -count
| where count>thresholdValue
| eval Priority=if(count>200,"3","4")
| eval createINCTicket="0"
| table domain,backend,error,source,thresholdValue,Priority,count,createINCTicket
| lookup incsearch DOMAIN AS domain URL AS backend OUTPUT APPCODE AS BackendAppcode CREATETICKET AS CT INCIDENT AS incident

Maintenance CSV lookup:

maint_backend,maint_domain,date_hour_start,date_hour_end,date_mday_start,date_mday_end
abcd.com,abcd-abcd,1,3,6,7
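One pattern that avoids inputlookup entirely (a sketch built on the column names above; maintenance_lookup is a hypothetical lookup definition pointing at the CSV, and note that with federated search the lookup generally must exist on the instance where that part of the search runs, which may be why inputlookup errors out): use the lookup command against the backend/domain fields, then drop rows that fall inside the maintenance window.

| eval cur_hour = tonumber(strftime(now(), "%H")), cur_mday = tonumber(strftime(now(), "%d"))
| lookup maintenance_lookup maint_backend AS backend, maint_domain AS domain OUTPUT date_hour_start, date_hour_end, date_mday_start, date_mday_end
| where isnull(date_hour_start) OR NOT (cur_mday >= date_mday_start AND cur_mday <= date_mday_end AND cur_hour >= date_hour_start AND cur_hour <= date_hour_end)

Rows with no match in the lookup pass through (date_hour_start is null); matched rows are kept only when the current day/hour is outside the configured window.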
I have 3 date columns. I have already calculated the difference from the current day; the values in the three columns are those differences in days (blank cells are null):

Col1  Col2  Col3
12          7
2     34    45
15    25
250   56    120
21

Required filter:
- I have to keep only rows where the days are <=40 in all 3 columns.
- If a column is null and the other 2 columns have values <=40, the row needs to be shown.
- If one or two columns are null and the remaining column(s) have values <=40, the row needs to be displayed.
- If a column is null and the other column values are >40, the row needs to be removed from scope.
Please let me know the search.
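One reading of those rules as SPL (a sketch; it treats null as "ignore this column", drops a row as soon as any non-null value exceeds 40, and also drops rows where all three columns are null):

| where (isnull(Col1) OR Col1 <= 40) AND (isnull(Col2) OR Col2 <= 40) AND (isnull(Col3) OR Col3 <= 40)
| where isnotnull(Col1) OR isnotnull(Col2) OR isnotnull(Col3)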
I am trying to only include dest_ip in my search if action is not "blocked". These are the input panels:

<input type="dropdown" token="my_action" searchWhenChanged="true">
  <label>Action</label>
  <choice value="*">any</choice>
  <choice value="allowed">allowed</choice>
  <choice value="blocked">blocked</choice>
  <prefix>action=</prefix>
  <change>
    <condition label="blocked">
      <unset token="is_not_blocked"></unset>
    </condition>
    <condition label="allowed">
      <set token="is_not_blocked">true</set>
    </condition>
    <condition label="*">
      <set token="is_not_blocked">true</set>
    </condition>
  </change>
  <default>*</default>
</input>
<input type="text" token="my_dest_ip" searchWhenChanged="true" depends="$is_not_blocked$">
  <label>Destination IP address (CIDR okay)</label>
  <default>*</default>
  <prefix>dest_ip=</prefix>
  <initialValue>*</initialValue>
</input>

This is the search:

<panel>
  <title>Network Connections by Source</title>
  <table>
    <title>Count of network connections by source - click on a line for list of sessions from that source</title>
    <search>
      <query>index=proxy $my_host$ $my_src_ip$ $my_dest_ip$ $my_url$ $my_action$
| lookup dnslookup clientip as src_ip OUTPUT clienthost as Host
| stats count by src_ip Host action
| table src_ip, Host action count
| sort -count
| rename src_ip as "Source_IP" action as Action count as "Count"</query>
      <earliest>$time_range.earliest$</earliest>
      <latest>$time_range.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
    <option name="count">20</option>
    <option name="dataOverlayMode">none</option>
    <option name="percentagesRow">false</option>
    <option name="refresh.display">progressbar</option>
    <option name="rowNumbers">false</option>
    <option name="totalsRow">false</option>
    <option name="wrap">true</option>
    <drilldown>
      <set token="drill_client_ip">$row.Source_IP$</set>
      <set token="drill_url">*</set>
      <set token="drill_dest_ip">*</set>
      <set token="drill_action">$row.Action$</set>
    </drilldown>
  </table>
</panel>

The input panel for my_dest_ip disappears when I select "blocked" in the action panel, but the search still includes dest_ip=*. What am I not understanding?
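What is probably happening (general Simple XML behavior): depends only hides the input; the my_dest_ip token keeps its last value, so $my_dest_ip$ still expands to dest_ip=* in the query. One sketch of a workaround is to route the filter through a second token that the change handler clears when "blocked" is selected (dest_filter is a hypothetical token name):

<change>
  <condition label="blocked">
    <unset token="is_not_blocked"></unset>
    <set token="dest_filter"></set>
  </condition>
  <condition label="allowed">
    <set token="is_not_blocked">true</set>
    <set token="dest_filter">$my_dest_ip$</set>
  </condition>
  <condition label="*">
    <set token="is_not_blocked">true</set>
    <set token="dest_filter">$my_dest_ip$</set>
  </condition>
</change>

...and use $dest_filter$ instead of $my_dest_ip$ in the query. A matching <change> handler on the text input would also be needed to keep dest_filter in sync while the input is visible.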
Hello, currently our client receives daily emails with the data from the CSV file embedded in the email. Is there a Splunk-only process for encrypting the email after embedding, or encrypting the attached CSV file? We know how to encrypt at the OS level, but want to know if it can be done via Splunk only. Example CSV file:

Name, Address, State, ...
John Smith, 123 B'way, NY
Betty Boop, 456 Main Street, NJ
etc.

Thanks in advance and God bless, Genesius
Hi, I have generated a search which returns a list of hosts and the count of events for each host. Sometimes the host values are returned as an IP address and other times as a host name. I have a lookup table which contains a list of all IP addresses and host names, in addition to other metadata. The result of the search is something like:

Host1        100
192.168.0.2  110
Host3        120

and the lookup table something like:

Host1  192.168.0.1  App1  Owner1
Host2  192.168.0.2  App2  Owner2
Host3  192.168.0.3  App3  Owner3

I need to look up the host value (IP or server name) returned in the search result and return all the metadata associated with that value.
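A common shape for this is to run the lookup twice, once keyed on each column, then coalesce (a sketch; host_metadata, hostname, ip, app, and owner are assumed names for the lookup and its columns shown above):

| lookup host_metadata hostname AS host OUTPUTNEW ip, app, owner
| lookup host_metadata ip AS host OUTPUTNEW hostname, app, owner
| eval hostname = coalesce(hostname, host)

The first lookup matches rows where host held a name, the second matches rows where it held an IP, and OUTPUTNEW keeps whichever match succeeded from being overwritten.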
Hello, I have these search results:

Error for user flow: AAAAA - user: BBBB - Msg: {\"_errorCode\":Z, \"_message\": \"Example Error Message\"}

I'm trying to get the number of each _errorCode for each user flow. I started with:

index="example_index" source="example_source" sourcetype="example_st" Error for
| rex field=_raw "user flow: (?<user_flow>\w+)"
| stats count as ErrorCount by user_flow

and was able to get the number of error occurrences under each user flow. I wanted to expand this query to be more granular and include the error code, so I would have:

UserFlow  ErrorCode  ErrorCount
AAAA      X          5
AAAA      Y          7
BBBB      F          1
BBBB      G          2

This is the query I came up with, but the statistics tab no longer shows anything:

index="example_index" source="example_source" sourcetype="example_st" Error for
| rex field=_raw "user flow: (?<user_flow>\w+)"
| rex field=_raw "_errorCode:\\\":(?<error_code>\d+)"
| stats count as ErrorCount by user_flow, error_code

I see the events tab is still populated with search results, but it looks like my addition to the query is not quite correct.
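In the sample event the escaped quote comes before the colon (_errorCode\":Z), while the added rex expects a colon first (_errorCode:\"), so the capture never matches; and the sample code Z is not a digit, so \d+ would miss it even then. A sketch that sidesteps the backslash-escaping question by matching any non-word separator, and captures word characters so both letters and digits work:

index="example_index" source="example_source" sourcetype="example_st" Error for
| rex field=_raw "user flow: (?<user_flow>\w+)"
| rex field=_raw "_errorCode\W+(?<error_code>\w+)"
| stats count as ErrorCount by user_flow, error_code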
Hi, I am using the Phantom OVA to run my Phantom instance. I had just managed to run my playbooks when I tested 8 hours ago. However, upon creating a new simple playbook and running the previously created playbook, I get the following error:

Error updating playbook. cannot mmap an empty file

Hence I am unable to save any progress on any playbooks now. I have tried searching online for solutions but have not found any. I came across an article (I forget the link) that gave the commands /opt/phantom/bin/stop_phantom.sh and /opt/phantom/bin/start_phantom.sh to restart the Phantom OVA instance, but they are not having any effect. I attempted to restart the Phantom service a few times and restarted the VM a few times, but it does not seem to work. I then deleted the VM from disk and reimported it, and the playbooks work fine until, after a while, the cycle repeats itself. While reimporting the VM "works", it is troublesome to reconfigure my current settings on the reimported instance every time I encounter this error. Is there a better solution?

As seen in the image, this second playbook is a simple one, and the first playbook I could run is also similar. Both playbooks had been configured and saved before I saved the VirtualBox VM state when I switched to other matters; when I resume the VM, I get this error. Please help, thank you very much!
I'm able to change the font size for the entire dashboard but not for a single table. My dashboard consists of multiple panels (tables); if I try to increase the font size of the text for a single table, it gets changed for the complete panel. I want to change the font size of my main panel so that it is bigger than the remaining ones.
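One common Simple XML approach is to give the main table an id and scope CSS to it from a hidden HTML panel (a sketch; main_table and the 18px size are placeholders):

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      #main_table table th, #main_table table td {
        font-size: 18px !important;
      }
    </style>
  </html>
</panel>
...
<table id="main_table">
  ...
</table>

Because the token alwaysHideCSS is never set, the panel stays hidden but the CSS still applies, and the #main_table selector keeps it from touching the other tables.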
I have been using the universal forwarder splunkforwarder-7.2.6-c0bf0f679ce9-Linux-x86_64 for quite a while without issues. I now wanted to upgrade to the latest one, 9.0.2, so I downloaded it and ran it just like I did with the old version. However, when starting it with

${SPLUNK_HOME}/bin/splunk start --accept-license --answer-yes --no-prompt

it seems to crash with

Error calling execve(): No such file or directory
Error launching command: Invalid argument

I then tried the latest 8.x version, 8.2.9, and that worked perfectly fine. What has changed between version 8 and 9? Are there any new requirements I am not aware of?
I have a dashboard that uses a dbxquery in the base search. I would like to make the dashboard "bilingual". Is it possible to alter the behavior of the dashboard and select a different base search depending on the value of a dropdown or radio button? For example, selecting the first of the two options should make this base search be used:

<search id="base1">
  <query>| dbxquery shortnames=true output=csv connection="CON_1" query="use [DB1] select TimeRaised as 'TimeTriggered', ...</query>

while selecting the second of the two options would use this one:

<search id="base1">
  <query>| dbxquery shortnames=true output=csv connection="CON_2" query="use [DB2] select TimeRaised as 'TimeTriggered', ...</query>
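Since the two queries differ only in the connection and database names, one sketch is a single base search with both values driven by tokens from a radio input (the choice labels here are placeholders):

<input type="radio" token="conn_tok" searchWhenChanged="true">
  <label>Data source</label>
  <choice value="CON_1">Language 1</choice>
  <choice value="CON_2">Language 2</choice>
  <default>CON_1</default>
  <change>
    <condition value="CON_1"><set token="db_tok">DB1</set></condition>
    <condition value="CON_2"><set token="db_tok">DB2</set></condition>
  </change>
</input>

<search id="base1">
  <query>| dbxquery shortnames=true output=csv connection="$conn_tok$" query="use [$db_tok$] select TimeRaised as 'TimeTriggered', ..."</query>
</search>

When either token changes, the base search re-dispatches and every post-process panel follows it.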
Hello, can anyone tell me why this configuration isn't working? I would like to change the index name from main to hue. I'm getting the data via DB Connect from a heavy forwarder (HF), and I would like to change the index name on the main indexer.

transforms.conf

[set_index_hue]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(stream\:Splunk_Postgres)$
DEST_KEY = _MetaData:Index
FORMAT = hue

props.conf

[stream:postgres]
TRANSFORMS-stream-postgres = set_index_hue

Best regards M.
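One thing worth checking (general index-time behavior, offered as a likely cause rather than a confirmed one): data that passes through a heavy forwarder is parsed there, so index-time transforms placed on the downstream indexer never see it. The same stanzas would then need to live on the HF running DB Connect, e.g. in an app or $SPLUNK_HOME/etc/system/local/ on the HF:

# transforms.conf on the heavy forwarder
[set_index_hue]
SOURCE_KEY = MetaData:Source
REGEX = ^source::(stream\:Splunk_Postgres)$
DEST_KEY = _MetaData:Index
FORMAT = hue

# props.conf on the heavy forwarder
[stream:postgres]
TRANSFORMS-stream-postgres = set_index_hue

A restart of the HF is needed after the change, and the [stream:postgres] stanza must match the events' actual sourcetype (or use a [source::...] stanza matching the source the REGEX expects).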
Index=dev log-severity=INFO app name=abcd | rex “tv counts for indicator S = (?&lt;Count&gt;\d+)” | stats count by _time, Counts | table _time, counts

Getting an error on the rex command: regex: syntax error in subpattern name (missing terminator). It worked last week and suddenly this error is showing up. I checked the data side; the data is there and there are no issues with the data. Please suggest.
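The query as pasted contains (?&lt;Count&gt;\d+) and curly quotes, which is a strong hint about the error: if the angle brackets reach the regex engine as the literal characters &lt; / &gt; (or the quotes are the curly “ ” kind rather than plain ASCII "), PCRE reads (?&lt;... as a malformed named group and reports exactly "syntax error in subpattern name (missing terminator)". Re-typing the command with clean characters should clear it (same search, plain quotes and real angle brackets, field names made consistent):

index=dev log-severity=INFO app name=abcd
| rex "tv counts for indicator S = (?<Count>\d+)"
| stats count by _time, Count
| table _time, Count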
The changes to the data source (lookup) are not immediately reflected, and some old information remains for several minutes. How do the content updates work? Cron? Or is each data source read and returned fresh on each inputlookup reference? Or does this depend on the environment, e.g. clustering, where synchronization between search heads takes time and a lag exists before the results reflect the changes?
On an existing dashboard I have a rather complex query that generates a timechart, on which I am looking to use annotations to highlight threshold breaches. Is there any way to avoid having to run the same query twice (once to create the initial chart, and a second time for the annotations)? Oh, I think I may be answering my own question: is the answer here going to be to use a base search? Thanks.
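A base search is the usual shape for this; whether an annotation search accepts base= the same way an ordinary post-process search does is worth verifying, but the sketch would look like this (names and the 500 threshold are placeholders):

<search id="base_chart">
  <query>index=... | timechart span=5m avg(response_time) AS avg_rt</query>
</search>

<chart>
  <search base="base_chart">
    <query>| table _time, avg_rt</query>
  </search>
  <search type="annotation" base="base_chart">
    <query>| where avg_rt > 500 | eval annotation_label="Threshold breach"</query>
  </search>
</chart>

The annotation search only needs _time plus annotation_label (and optionally annotation_category/annotation_color), so the post-processed where clause stays cheap.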
I am using the Splunk Distribution of OpenTelemetry Collector in Kubernetes. The current solution works just fine, but after I added a section for the smartagent/jmx receiver with a Groovy script inside, the healthcheck starts to show "Server not available" status. The Groovy script works; the logs include several lines like

2022-11-07T11:12:14.730Z error subproc/core.go:114 Get result, and sent:0 {"kind": "receiver", "name": "smartagent/jmx", "pipeline": "metrics", "monitorID": "smartagentjmx", "monitorType": "jmx", "runnerPID": 42}

(I just added stderr output to the script.) There are no other warnings/errors in the logs. Kubernetes just kills the pod because of the healthcheck. My jmx config:

smartagent/jmx:
  type: jmx
  host: 0.0.0.0
  port: 9999
  intervalSeconds: 2
  groovyScript: |
    def printErr = System.err.&println
    ss = util.queryJMX("com.hazelcast:name=MAP_NAME,instance=*,type=IMap").first()
    dims = [env_name: "NAME"]
    output.sendDatapoint(util.makeGauge("hazelcast.map.size", ss.size, dims))
    printErr("Get result, and sent:" + ss.size)

Tell me where and how to dig?
I am using the following rex command to extract an id number, which is in the following format: 1e4gd5g7-4fy6-fg567-3d46-3gth63f57h35. I am also using the rex command to extract email addresses. However, it seems to extract the wrong information; let me show you:

index=keycloak "MFA"
| regex _raw="MFA challenge failed"
| rex "(?i) is (?P<keycloak_id>[^\"]+)"
| rex "(?i) is (?P<email_address>.+?)\.\s+"
| table Account_ID, email_address, keycloak_id, _time

However, this is the output that I get:

Account_ID: aaaaaaa
email_address: 'OTP is invalid'
keycloak_id: 'OTP is invalid'. Keycloak session id is 1e4gd5g7-4fy6-fg567-3d46-3gth63f57h35
_time: 2022-11-07 09:56:17.00

I'm really struggling to properly extract the right information that I'm looking for. Any help would be greatly appreciated.
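Both captures start at the first " is " in the event, which lands inside 'OTP is invalid'. Anchoring each rex on the longer literal phrase in front of the value should separate them (a sketch: the id pattern takes the five dash-separated tokens after "session id is", matching the sample id shown, and the email pattern matches a plain address anywhere in _raw rather than relying on " is "):

index=keycloak "MFA"
| regex _raw="MFA challenge failed"
| rex "(?i)session id is (?<keycloak_id>[0-9a-z]+(?:-[0-9a-z]+){4})"
| rex "(?<email_address>[\w.+-]+@[\w.-]+\.\w{2,})"
| table Account_ID, email_address, keycloak_id, _time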
I am looking for an alert for when any search listed in (| rest /services/saved/searches splunk_server=local) is modified.
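One place those edits show up is splunkd's internal REST access log, since saving or editing a search is a POST to the saved/searches endpoint (a sketch to build an alert on; the sourcetype and path are standard on full Splunk Enterprise instances, but verify against your environment):

index=_internal sourcetype=splunkd_access method=POST uri_path=*/saved/searches/*
| rex field=uri_path "/saved/searches/(?<saved_search>[^/?]+)"
| stats latest(_time) AS last_modified BY user, saved_search
| convert ctime(last_modified)

Scheduled over a short window with an "alert when results > 0" condition, this fires whenever anyone creates or modifies a saved search.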