All Topics

I have a rex that works fine when plugged into regex101, but when applied via a Splunk query it returns a blank result.

Text:

2022/02/01 23:07:26.979 [ERROR] [nrfClient.Discovery.nrf] Message send failed, response [Type:ABC Http2_Status:404 CauseCode:"CONTEXT_NOT_FOUND" RetryExhausted:true MsgType:1434 ServiceName:nabc SelectedProfileName:"abc-profile" FailureProfile:"FHABC" GroupID:"ABC-*" ]

rex:

Http2_Status:\d{3}\sCauseCode:\"(?<Error2>\w+)\"\s

regex101 result: CONTEXT_NOT_FOUND

But when plugged into Splunk, it comes back with a blank result.
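A common cause of "works in regex101, blank in Splunk" is the surrounding whitespace or the trailing \s differing between the raw event and the text pasted into the tester. A minimal sketch that anchors only on CauseCode (keeping the Error2 field name from the post):

```
... | rex field=_raw "CauseCode:\"(?<Error2>[^\"]+)\""
```

If this extracts, tighten the pattern back up one piece at a time against the real event to find the part that fails.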
Say I have a batch job that pushes JSON records that look like this on Monday:

{
   Department: Engineering
   Employee_Number: 4642
   Employment_Status: Active
   Termination_Date:
   Full_Name: Jane Doe
}

But on Tuesday a new record gets pushed like this:

{
   Department: Engineering
   Employee_Number: 4642
   Employment_Status: Terminated
   Termination_Date: 01/31/2022
   Full_Name: Jane Doe
}

How would I create a search that compares the Employment_Status for each record and only returns the records that transitioned to "Terminated" within the last 2 days? I tried the following, but it's not working:

index=myinventory sourcetype=HR earliest=-2d@d
| eventstats earliest(_time) as earliestEventTime by Employment_Status
| dedup FullName, Employment_Status
| where Employment_Status!="Active"
| table _time, earliestEventTime, FullName, Employment_Status
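One possible sketch (assuming the field names from the JSON above — note the posted search uses FullName while the records carry Full_Name): compare each employee's earliest and latest status in the window and keep only actual transitions:

```
index=myinventory sourcetype=HR earliest=-2d@d
| stats earliest(Employment_Status) as firstStatus latest(Employment_Status) as lastStatus latest(_time) as _time by Full_Name
| where lastStatus="Terminated" AND firstStatus!="Terminated"
| table _time Full_Name lastStatus
```

This avoids dedup ordering surprises, since stats earliest/latest are driven by _time rather than event arrival order.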
I have JSON data from a file generated by the Ookla speedtest -f json command. I have tried to cast it or eval it in different ways, but I am doing something wrong:

Error in 'eval' command: Type checking failed. '*' only takes numbers.

My search command is:

sourcetype="SpeedTest" | eval dmbs=(download.bandwidth)*8/1000000 | table _time download.bandwidth dmbs

And the example JSON is ingested like this:

{
  download: {
    bandwidth: 10420951
    bytes: 81587520
    elapsed: 7908
  }
  interface: { [+] }
  isp: Vivo
  packetLoss: 0
  ping: { [+] }
  result: { [+] }
  server: { [+] }
  timestamp: 2022-02-01T22:00:31Z
  type: result
  upload: {
    bandwidth: 5706691
    bytes: 80526240
    elapsed: 14946
  }
}
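In eval, a field name that contains a dot must be wrapped in single quotes; otherwise download.bandwidth is parsed as an expression rather than as one field reference, the value resolves to null, and the numeric type check on * fails. A sketch of the corrected search:

```
sourcetype="SpeedTest"
| eval dmbs='download.bandwidth'*8/1000000
| table _time download.bandwidth dmbs
```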
I have an automated script that creates a log file marking the beginning and end of specific events during a web page process, and I want to monitor how long each process takes. I have put flags in the log file to denote the beginning and end of each process: OpenPage, Login, Search, Book, CustomerInfo, Seats, ClosePage. I can monitor that a certain process runs, and its duration, with the transaction command:

transaction startswith="LoginProcessStart" endswith="LoginProcessEnd"

What I don't know is how to monitor several processes and add them to the same table/chart. I have tried using multiple transaction commands, but I get an error stating the preceding search does not guarantee time-ordered events. Any ideas on how to approach this problem? Thank you for any ideas you may have.
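One way to avoid chaining transaction commands (a sketch, assuming every flag follows the <Name>ProcessStart / <Name>ProcessEnd pattern shown above): extract the process name into a field first, then run a single transaction grouped by that field:

```
... | rex "(?<process>\w+)Process(?:Start|End)"
| transaction process startswith="ProcessStart" endswith="ProcessEnd"
| table process duration
```

Because the transaction is keyed by the process field, all seven processes land in one result set that can feed one table or chart.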
Hi,

I have a year of data and I want to create a search query to find the weekly trend, i.e. highlight a week-wise pattern such as "activity is high in the 3rd week of every month" or "the 1st week in most months, followed by the 4th week". What I am using is:

index="index" source IN ("source") | timechart span=1w count by activity

which gives me the data week by week for the months. Thanks
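To compare weeks-of-the-month across months rather than calendar weeks, one possible sketch (deriving a week-of-month number from the day of the month; index and source kept as the placeholders above):

```
index="index" source IN ("source")
| eval week_of_month=ceiling(tonumber(strftime(_time,"%d"))/7)
| chart count over week_of_month by activity
```

Days 1-7 map to week 1, 8-14 to week 2, and so on, so a consistently high row makes the monthly pattern visible.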
I'm working on an indexer, trying to forward all data ingested with IT Essentials Work + Splunk Add-on for Unix & Linux to a remote indexer cluster. Until now, that indexer has been receiving events into all itsi_* indexes, but when I try to set up the forwarding option on that indexer, I cannot get the forwardedindex.n.whitelist and blacklist settings to forward only the itsi_* indexes to the IDX cluster. I've tried to override all the default whitelists and blacklists in local and reset the whitelists to the itsi_* indexes, but this still forwards all indexes, not only the itsi_* ones. My outputs.conf file is like the following:

[tcpout]
defaultGroup = default-autolb-group
forwardedindex.0.whitelist =
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
forwardedindex.0.whitelist = (itsi_grouped_alerts|itsi_im_meta|itsi_im_metrics|itsi_import_objects|itsi_notable_archive|itsi_notable_audit|itsi_summary|itsi_summary_metrics|itsi_tracked_alerts)
indexAndForward = 1

[tcpout:default-autolb-group]
disabled = false
server = HFtoIDXCluster:9997
useACK = true

If I use a "default" config option, overriding the lists without resetting them (not declaring the 3 default lists empty in the tcpout stanza), I get the same behaviour. This is the first time I've tried to set forwarding options from an indexer. I need to forward this data because it's used for administration of each Splunk instance and has to end up on a specific Splunk Enterprise cluster, but none of the other indexes need to be forwarded. Have I missed something in the config files?

Best regards
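Note that the stanza above defines forwardedindex.0.whitelist twice, so only one of the two values survives parsing. A sketch of one possible layout (each local forwardedindex.N setting replaces the default rule with the same number, and when an index matches several rules the highest-numbered match wins; verify against the outputs.conf.spec for your version):

```
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 1
# replace default rule 0 with the itsi_* allow list, and blank out default rules 1 and 2
forwardedindex.0.whitelist = (itsi_grouped_alerts|itsi_im_meta|itsi_im_metrics|itsi_import_objects|itsi_notable_archive|itsi_notable_audit|itsi_summary|itsi_summary_metrics|itsi_tracked_alerts)
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
forwardedindex.filter.disable = false
```

One side effect to weigh: blanking default rule 2 also stops _internal/_audit from being forwarded, which that rule normally re-enables.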
I have a multivalue field of type string in my results. I want to search and match all values of this field for words that come from a CSV lookup.

Lookup rows:

{row1, "mary white"}
{row2, "Tom White"}

Results:

Astring="Mary had a white lamb", "tom had none", "Joe was here"
Astring="No match here", "tom thumb"

Searching with the lookup values against the full text of the event is not an acceptable answer.
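One possible sketch, assuming a lookup definition (here called names_lookup, a hypothetical name) created over the CSV with match_type = WILDCARD(name) and case_sensitive_match = false in its transforms settings, and the CSV values stored with wildcards (e.g. *mary white*): expanding the multivalue field lets each value be matched individually rather than matching against the whole event:

```
... | mvexpand Astring
| lookup names_lookup name as Astring OUTPUT name as matched_name
| where isnotnull(matched_name)
```

Rows survive only where a specific Astring value contains a lookup phrase, which keeps the match per-value as required.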
When using a link input, the default selected input appears one way on load and differently once you click any of the choices (screenshots not included). I would like to apply the CSS style of a selected link input button to the default one when the dashboard loads. I can play with tokens to do this; I just cannot find the applied CSS. I know it might be found by inspecting the page somehow, but I cannot locate it. I have this run-anywhere example:

<form>
  <label>TEST</label>
  <row>
    <panel>
      <html>
        <style>
          #button button{
            background-color: #F7F8FA !important;
            margin-right: 10px;
          }
          .dashboard-panel, .panel-body.html{
            background: #F2F4F5 !important;
          }
        </style>
        <center>TEST</center>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <input id="button" type="link">
        <label></label>
        <choice value="A">A</choice>
        <choice value="B">B</choice>
        <default>A</default>
        <change>
          <condition value="A">
            <set token="show_pabel_a">true</set>
            <unset token="show_pabel_b"></unset>
          </condition>
          <condition value="B">
            <unset token="show_pabel_a"></unset>
            <set token="show_pabel_b">true</set>
          </condition>
        </change>
      </input>
      <single depends="$show_pabel_a$">
        <search>
          <progress>
            <condition match="'job.resultCount' &gt; 0">
              <set token="show_panel_a">true</set>
            </condition>
          </progress>
          <query>| makeresults | eval test="A" | fields - _time</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="drilldown">none</option>
      </single>
      <single depends="$show_pabel_b$">
        <search>
          <progress>
            <condition match="'job.resultCount' &gt; 0">
              <set token="show_panel_a">true</set>
            </condition>
          </progress>
          <query>| makeresults | eval test="B"</query>
          <earliest>0</earliest>
          <latest></latest>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>
</form>
If you have a dashboard that has a panel with a search like the one below:

| rest splunk_server=* /services/-/-/admin/......../appName/local
| table name splunk_server title

how can you make it search the other search heads? (A search like the one above returns values for the current search head and its peers, i.e. the indexers.)
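For reference, splunk_server=* in a | rest search only reaches the local search head and its configured search peers. One possible sketch, assuming the other search heads have themselves been added as search peers of this instance (an unusual but workable setup; <other_sh_name> is a placeholder):

```
| rest splunk_server=<other_sh_name> /services/-/-/admin/......../appName/local
| table name splunk_server title
```

Without that peering, each search head has to be queried from its own UI or via its REST API directly.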
Hello all, we're configuring the Splunk Enterprise Security app within our environment. While testing alerts, the alert actions for sending email notifications are not working. We checked the internal error logs and observed the below. Any idea what is causing this error?

ERROR:root:(501, b'Syntax error, parameters in command "mail FROM:<internal server> size=9571" unrecognized or missing'
ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/sendemail.py

Thank you.
We are getting "Caused by: java.io.InterruptedIOException: timeout" exceptions in the logs from the server; however, the server is not returning a response code to us.

I am looking for the standard practice for timeout monitoring in Splunk, so we can plot a graph of the number of timeouts received per second. Should we add an error code like 408 to the exception on the client side, or should we plot the graph based on counting the text "timeout" over time? Looking for assistance and suggestions.
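If adding a client-side status code isn't feasible, counting on the exception text works as a first pass. A sketch (index=app_logs is a placeholder for wherever these logs land; a per-minute span with a derived rate is usually more readable than span=1s):

```
index=app_logs "java.io.InterruptedIOException" "timeout"
| timechart span=1m count as timeouts
| eval timeouts_per_sec=round(timeouts/60,2)
```

Emitting a dedicated status code like 408 would still be the more robust long-term option, since free-text matching breaks if the exception message changes.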
Hello community. I have an on-premises installation of Phantom; the database is running on the same server. When I go to Administration > System Health, all the tables are empty: there is no information about memory, load, or disk, and the status of the services is Unknown. Phantom is working properly, so my guess is that there is an issue with the graphs. I have tried restarting the server, but no result. This is what I see in the System Health tab: (screenshot not included). What could be the problem here? Thank you.
Hi, this CSS class doesn't take effect — what is wrong, please?

<row depends="$STYLES$">
  <panel>
    <html>
      <style>
        .intro {
          background-color: yellow;
        }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <html>
      <p class="intro">TUTU.</p>
    </html>
  </panel>
</row>
Hello, I have checkboxes that will serve as filters. I want to color-code the text next to each checkbox, NOT the label on top. I already got the latter working:

#input_severity_low {
  text-shadow: 1px 1px 2px black, 0 0 25px green, 0 0 5px darkgreen;
  font-variant: small-caps;
}

However, this really only affects the label over the checkbox. I want the text of (next to) the checkbox to be altered. My web analyzer shows me something like:

<label data-test="label" for="clickable-ae3424f7-85da-4201-9152-a98bf237f15d" data-size="medium" class="SwitchStyles__StyledLabel-tie4e6-7 hGDbnW"> 4 - Low (<some number>)</label>

However, the CSS style does not react to the class. Anyone have any ideas?

Kind regards, Mike
Hi all, I have a token "range" which is in the format 0-2, 2-5, 5-10, 10-100, etc. I am splitting it by "-" and saving the values as "minor" and "major". When I try to use those values in the query, I am not able to get the results. The query is as follows:

search index="abc" sourcetype="xyz"
| eval range="$time$"
| eval temp=split(range,"-")
| eval minor=mvindex(temp,0)
| eval major=mvindex(temp,1)
| search duration>minor AND duration<=major
| table task duration URL

I am not able to display the table. Can anyone please help me with this?
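In | search, the right-hand side duration>minor compares the duration field against the literal string "minor", not against the field; field-to-field comparison needs | where, and the split values should be cast to numbers. A sketch (index/sourcetype placeholders kept from the post, assuming duration is numeric):

```
index="abc" sourcetype="xyz"
| eval temp=split("$time$","-")
| eval minor=tonumber(mvindex(temp,0)), major=tonumber(mvindex(temp,1))
| where duration>minor AND duration<=major
| table task duration URL
```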
I inherited an old Splunk environment where all data was indexed into the main index. I have set up a new environment with multiple indexes and some parsing rules on a heavy forwarder (these configs work perfectly with the universal forwarders I have deployed). How would I forward the data from the original main index into the heavy forwarder for redistribution into the new indexes?
Hi, I am trying to configure Palo Alto logs via Splunk Connect for Syslog. I followed the instructions here:

https://splunk.github.io/splunk-connect-for-syslog/main/sources/PaloaltoNetworks/

I configured syslog on the Palo Alto side according to the instructions, and I can see the syslog connections arriving at the host from the firewall using the command tcpdump port 514. I added the following lines to splunk_metadata.csv:

pan_config,index,test
pan_correlation,index,test
pan_globalprotect,index,test
pan_hipmatch,index,test
pan_log,index,test
pan_system,index,test
pan_threat,index,test
pan_traffic,index,test
pan_userid,index,test

and restarted sc4s:

systemctl restart sc4s

I checked the index test and it is empty. I enabled debugging by adding this line to the env_file:

SC4S_DEST_GLOBAL_ALTERNATES=d_hec_debug

and it seems like the index defined in splunk_metadata.csv is not taken; instead osnix is used:

curl -k -u "sc4s HEC debug:$SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN" "https://splunk.XX.XXX.XXu:8088/services/collector/event" -d '{"time":"1643726324.000","sourcetype":"nix:syslog","source":"program:","index":"osnix","host":"atlas-fw-01.XXX.XX.XX","fields":{"sc4s_vendor_product":"nix_syslog","sc4s_syslog_severity":"info","sc4s_syslog_format":"rfc5424_strict","sc4s_syslog_facility":"user","sc4s_proto":"UDP","sc4s_loghost":"xxxxxxxxxx","sc4s_fromhostip":"192.168.10.100","sc4s_destport":"514","sc4s_container":"xxxxxxxx"},"event":"2022-02-01T14:38:44.000+00:00 atlas-fw-01.xxx.xxx.xxx - - - - 1,2022/02/01 15:38:43,011901021137,TRAFFIC,end,2561,2022/02/01 15:38:43,192.168.20.63,157.240.27.54,154.14.118.254,157.240.27.54,Normal traffic,xxx\\yyy,,quic,vsys1,Internal,External,ae1,ae2.6,Splunk,2022/02/01 15:38:43,113676,1,56081,443,49985,443,0x400019,udp,allow,7358,2250,5108,19,2022/02/01 15:36:43,0,any,,7030011678692056750,0x0,192.168.0.0-192.168.255.255,Germany,,7,12,aged-out,0,0,0,0,,atlas-fw-01,from-policy,,,0,,0,,N/A,0,0,0,0,c8250554-4ccd-46e3-8498-e74cfe9cdd10,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2022-02-01T15:38:44.130+01:00,,,infrastructure,networking,browser-based,1,tunnel-other-application,,quic,no,no,0"}'

Note the sourcetype is nix:syslog and sc4s_vendor_product is nix_syslog, so the events are falling through to the generic syslog path instead of being classified as Palo Alto traffic. I have already checked that the HEC token is allowed to index test. Could someone tell me what is happening? Thanks
I have been asked to start monitoring several Windows servers for compute consumption, i.e. CPU and memory, at 15-second sample intervals. A metrics index is the natural place for this. I'm wondering how much license it will consume per system. I was thinking the following approach might work, and I'd be interested in peer review (not that I can say I'm a peer of many on this forum, given my noob level at Splunk!):

- Use a VM with 1 vCPU and a 15s sample interval for CPU and memory, sent to a dedicated metrics index
- Collect for 12 hrs
- View license consumption

I could then say that adding another vCPU to a system would require X amount of licensing, based on the same sample interval. Is this reasonable?
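For context, Splunk licenses metrics at a flat 150 bytes per metric event, which makes a back-of-envelope estimate possible before the 12-hour test: assuming one metric event per measure per sample, 2 measures x 4 samples/minute x 1,440 minutes = 11,520 events/day, i.e. roughly 1.7 MB/day per host (real collectors often emit more than one counter per measure, so treat this as a floor). The empirical check can then come from the license usage log (my_metrics_index is a placeholder):

```
index=_internal source=*license_usage.log type=Usage idx=my_metrics_index
| stats sum(b) as bytes
| eval MB=round(bytes/1024/1024,2)
```

Comparing the measured MB against the estimate per vCPU would validate the extrapolation approach described above.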
Hi,

I'm trying to exclude events from a time range:

index = _internal
| eval Hour=strftime(_time,"%H")
| eval Minute=strftime(_time,"%M")
| eval DayofWeek=strftime(_time,"%w")
| eval Month=strftime(_time,"%m")
| eval WeekOfYear=strftime(_time,"%U")
| search NOT DayofWeek=3 AND Hour>10 Hour<13

With the above query I am trying to exclude Wednesday between 10:00 and 13:00, but it excludes the whole day. Does anyone have suggestions? I also have one more scenario: excluding particular hours on both Monday and Wednesday.
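In the query above, NOT binds only to DayofWeek=3, so every Wednesday event is excluded regardless of hour. Grouping the clauses with parentheses (and casting the strftime output to numbers) excludes only the window. A sketch:

```
index=_internal
| eval Hour=tonumber(strftime(_time,"%H")), DayofWeek=tonumber(strftime(_time,"%w"))
| where NOT (DayofWeek=3 AND Hour>=10 AND Hour<13)
```

For the second scenario, OR another clause inside the NOT, e.g. NOT ((DayofWeek=3 AND Hour>=10 AND Hour<13) OR (DayofWeek=1 AND Hour>=18 AND Hour<20)), where Monday's hours here are illustrative.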
Hi,

I already did some research, but it seems our case is a bit special. We collect inventory and performance data from our vCenters with the Add-on for VMware (Splunk_TA_vmware), version 4.0.2. The heavy forwarder running this TA is also the DCN. I am not able to restrict the collection of performance data with the given options: the interval can be set to a higher or lower value, but the data gathered by the worker is still the same, since it collects everything since the last input. Because we don't need performance data every 20 seconds, I would prefer a 1-hour-average event, or, if that's not possible, one event per 30 minutes with the latest values. Is there a way to achieve this? It doesn't matter whether it's by design or a Splunk workaround.

Example raw data:

vm-44 500c714d-861b-2f53-1f7f-16d8e72c4e28 aggregated 20 0.04 0.04 2.79 2.73 389 410