All Topics



Good day all! I have created a lookup (| inputlookup Autosys.csv) with the fields KB, REGION and JOB_NAME. I also have a Splunk search that returns some job data. How can I bring the other lookup fields into my events, using JOB_NAME as the common field? Below is the search to which I want to add the lookup data:

index=index_name sourcetype=source_name
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval START_SLA=if(Job_start_by <= Actualstarttime,"Started On Time","Started Late")
| eval END_SLA=if(RUNTIME <= AVG_RUN_TIME,"END ONTIME","END SLA BREACH")
| search NEXT_START!=NULL
| table JOB_NAME,JOB_GROUP,TIMEZONE,STATUS,Currenttime,STATUS_TIME,LAST_START,LAST_END,NEXT_START,DAYS_OF_WEEK,EXCLUDE_CALENDAR,RUNTIME,Actualstarttime,Job_start_by,START_SLA,AVG_RUN_TIME,END_SLA
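A minimal sketch of one way to enrich these events from the CSV, assuming a lookup table file (or lookup definition) named Autosys.csv exists and is shared with the search app; KB and REGION are the field names given in the post:

```
index=index_name sourcetype=source_name
| lookup Autosys.csv JOB_NAME OUTPUT KB, REGION
```

Placed before the final table command, this adds KB and REGION to every event whose JOB_NAME matches a row in the lookup.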
Hello everyone, currently our indexers keep crashing randomly. We're running only Linux, on Splunk 9.0.2. Any suggestions as to what the crashing thread means and how to solve it? Thank you.

Received fatal signal 6 (Aborted) on PID 235655.
Cause: Signal sent by PID 235655 running under UID 1018.
Crashing thread: FwdDataReceiverThread
Registers:
RIP: [0x00007F4A05C3E387] gsignal + 55 (libc.so.6 + 0x36387)
RDI: [0x0000000000039887] RSI: [0x00000000000399C9] RBP: [0x000000000000008F] RSP: [0x00007F49E4FFE238]
RAX: [0x0000000000000000] RBX: [0x000055B8710F5CA8] RCX: [0xFFFFFFFFFFFFFFFF] RDX: [0x0000000000000006]
R8: [0x00007F49E4FFF700] R9: [0x00007F4A05C552CD] R10: [0x0000000000000008] R11: [0x0000000000000206]
R12: [0x000055B870FE5A93] R13: [0x000055B8710F5D88] R14: [0x000055B872226488] R15: [0x00007F49E4FFE4E0]
EFL: [0x0000000000000206] TRAPNO: [0x0000000000000000] ERR: [0x0000000000000000] CSGSFS: [0x0000000000000033]
OLDMASK: [0x0000000000000000]
OS: Linux
Arch: x86-64
Backtrace (PIC build):
[0x00007F4A05C3E387] gsignal + 55 (libc.so.6 + 0x36387)
[0x00007F4A05C3FA78] abort + 328 (libc.so.6 + 0x37A78)
[0x000055B86E1D4D26] ? (splunkd + 0x1A08D26)
[0x000055B86EE39BD2] _ZN26HealthDistIngestionLatency29calculateAndUpdateHealthColorEv + 914 (splunkd + 0x266DBD2)
[0x000055B86E744627] _ZN22TcpInPipelineProcessor7processER15CowPipelineData + 199 (splunkd + 0x1F78627)
[0x000055B86E74CD57] _ZN14FwdDataChannel16s2sDataAvailableER15CowPipelineDataRK15S2SPerEventInfom + 167 (splunkd + 0x1F80D57)
[0x000055B86F2B2255] _ZN11S2SReceiver11finishEventEv + 261 (splunkd + 0x2AE6255)
[0x000055B86F059E48] _ZN18StreamingS2SParser5parseEPKcS1_ + 6520 (splunkd + 0x288DE48)
[0x000055B86E73E004] _ZN16CookedTcpChannel7consumeER18TcpAsyncDataBuffer + 244 (splunkd + 0x1F72004)
[0x000055B86E74055D] _ZN16CookedTcpChannel13dataAvailableER18TcpAsyncDataBuffer + 45 (splunkd + 0x1F7455D)
[0x000055B86F592D03] _ZN10TcpChannel11when_eventsE18PollableDescriptor + 531 (splunkd + 0x2DC6D03)
[0x000055B86F4D5BCC] _ZN8PolledFd8do_eventEv + 124 (splunkd + 0x2D09BCC)
[0x000055B86F4D6B39] _ZN9EventLoop3runEv + 617 (splunkd + 0x2D0AB39)
[0x000055B86F58D68C] _ZN19Base_TcpChannelLoop7_do_runEv + 28 (splunkd + 0x2DC168C)
[0x000055B86F58D78E] _ZN25SubordinateTcpChannelLoop3runEv + 222 (splunkd + 0x2DC178E)
[0x000055B86F59A16D] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2DCE16D)
[0x000055B86F59B062] _ZN6Thread8callMainEPv + 178 (splunkd + 0x2DCF062)
Hi Splunk community, I need to display the data as in the table below:

Component  Total units  Violated units  Matched [%]
Type A     1            1               99
Type B     10           10              75
Type C     100          85              85
Total      111          96              86

In the Total row, the Matched value is the average of the column, while the others are column sums. Is it possible to insert the average value into the total row as shown? Here's my SPL:

index="my_index" source="*sourcename*"
| stats count as total_units count(eval(isnull(approval_message))) as violated_units values(matched_percentage) as matched by component
| addcoltotals total_units violated_units labelfield=component
| rename total_units as "Total Units", violated_units as "Violated Units", matched as "Matched [%]"
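One way to build a Total row whose Matched [%] is an average while the unit columns are sums is to replace addcoltotals with appendpipe, which can compute each total differently. A sketch against the field names above, assuming matched holds a single value per component:

```
index="my_index" source="*sourcename*"
| stats count as total_units count(eval(isnull(approval_message))) as violated_units values(matched_percentage) as matched by component
| appendpipe
    [ stats sum(total_units) as total_units sum(violated_units) as violated_units avg(matched) as matched
      | eval component="Total" ]
| rename total_units as "Total Units", violated_units as "Violated Units", matched as "Matched [%]"
```

appendpipe runs the bracketed pipeline over the existing result rows and appends its output, so the Total row is computed from the per-component rows already on screen.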
I am seeing the two log entries below (one error and one INFO) on my Splunk Cloud instance, and I am not able to fetch data:

1. socket error from 127.0.0.1:52108 while accessing /en-US/: Connection closed by peer
2. 12-20-2022 05:59:00.096 +0000 INFO ExecProcessor [24710 ExecProcessorSchedulerThread] - setting reschedule_ms=59904, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/search/bin/quarantine_files.py

I am using a trial version of Splunk Cloud. Any suggestions?
Hi Splunk Inc. Team, I'm experiencing truncation issues across all "OktaIM2:*" sourcetypes that in most cases only TRUNCATE=250000 can resolve. I also found an issue with the LINE_BREAKER regex pattern for sourcetype=OktaIM2:group that causes logs not to be ingested: the current pattern defaults to ([\r\n]+) and I had to modify it to (?<=\}\}\})(\, )

Can we please have these issues addressed and a new version of this add-on cut on Splunkbase? Thank you.
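For reference, the workaround described above can be expressed as a local props.conf override (a sketch, assuming it is deployed where parsing happens, i.e. the indexers or a heavy forwarder; the stanza name and values are taken from the post):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[OktaIM2:group]
TRUNCATE = 250000
LINE_BREAKER = (?<=\}\}\})(\, )
```

Settings in local/ survive add-on upgrades, so this can tide you over until a fixed version ships.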
Alerts suddenly stopped in my local instance, and I am getting the error shown in the image above. Can anyone please suggest a solution? I didn't change my email password; it is the same as when it was working. I configured my Outlook mail with an app password, and I recreated the app password and reconfigured it, but I am still facing the issue.

Thank you.
Hi, after onboarding Trend Micro XDR we are facing a few issues:
1. We are getting logs in JSON format.
2. The data is not parsed.

Queries:
1. Can you please help us with how to convert the data from JSON format to raw logs?
2. How can we parse the data? We are not finding any add-on.

Note: attaching a snap. We are getting data, and below it there is an option "show as raw text"; when we click on it, everything comes out on the same line. Kindly help us solve this issue.

Thanks, Debjit
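A minimal sketch for extracting fields from JSON events at search time, assuming each event is a single JSON document (the index and sourcetype names are placeholders):

```
index=your_index sourcetype=your_trendmicro_sourcetype
| spath
| table *
```

If the events are valid JSON, setting KV_MODE = json in props.conf for the sourcetype gives you the same extractions automatically at search time, without piping to spath in every search.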
Hi All, I have integrated Splunk HEC with Spring Boot. When I hit the application and check in Splunk, I am unable to see the logs in Splunk search under the given index. I am using sourcetype log4j2. Can anyone help me?

Thanks in advance.
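One way to narrow this down is to send a test event to the HEC endpoint directly with curl and check whether it appears under the index; the host, token, and index below are placeholders to replace with your own:

```shell
# Send a single test event to HEC (placeholder host/token/index).
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "hec smoke test", "sourcetype": "log4j2", "index": "main"}'
```

A {"text":"Success","code":0} response means HEC accepted the event; if it then shows up in the index, the problem is on the application side (wrong token, wrong index name, or the token's allowed-indexes list).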
Hi, I have 2 searches.

1st query (100 results, including duplicate number values):

index="abc" message.appName=app1 "Description"="After some string*"
| table _time Id number

2nd query (80 results, including duplicate d_number values):

index="abc" message.appName=app2 "Description"="After some string2*"
| table _time d_Id d_number

The number and d_number fields contain matching values. How do I get only those number values that are not matched by any d_number? I need only the 100-80=20 number results from the 1st query, which may contain duplicate values (i.e. query1 minus query2). Thank you in advance for your answer.
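A sketch of one way to do this with a NOT subsearch, assuming the second query's result set fits within subsearch limits: rename d_number to number inside the subsearch so the outer search excludes every matching value while keeping duplicates from the first query:

```
index="abc" message.appName=app1 "Description"="After some string*"
| table _time Id number
| search NOT
    [ search index="abc" message.appName=app2 "Description"="After some string2*"
      | rename d_number as number
      | fields number ]
```

The subsearch expands to (number="v1") OR (number="v2") OR ..., and the NOT drops any outer row whose number appears in that list.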
It seems that the KV store is enabled by default on all servers. On non-SHs, if we set [kvstore] disabled = true and upgrade from Splunk 8.1.x to Splunk 9.0.x: Will the storage engine migrate from MMAPv1 to WiredTiger? Will the server version upgrade to 4.2.17? I know that if [kvstore] disabled = false, the upgrade should migrate to WiredTiger and server version 4.2.17. I am just wondering whether the migration and upgrade happen regardless of whether the KV store is enabled. I may need to test this out in the lab.
Looking for help extracting info between XML tags. This is generated from Windows print server event logs. The raw data looks like this:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-PrintService' Guid='{747EF6FD-E535-4D16-B510-42C90F6873A1}'/><EventID>307</EventID><Version>0</Version><Level>4</Level><Task>26</Task><Opcode>11</Opcode><Keywords>0x4000000000000840</Keywords><TimeCreated SystemTime='2022-12-19T20:07:42.882389000Z'/><EventRecordID>Record_ID</EventRecordID><Correlation/><Execution ProcessID='procID' ThreadID='threadID'/><Channel>Microsoft-Windows-PrintService/Operational</Channel><Computer>User's computer</Computer><Security UserID='SID'/></System><UserData><DocumentPrinted xmlns='http://manifests.microsoft.com/win/2005/08/windows/printing/spooler/core/events'><Param1>216</Param1><Param2>Print Document</Param2><Param3>Username</Param3><Param4>Users_computer</Param4><Param5>Print Share Name</Param5><Param6>printer_name</Param6><Param7>1406204</Param7><Param8>1</Param8></DocumentPrinted></UserData></Event>

How do I extract Param3 and Param5? I have attempted the following regular expression, but it fails to find them:

| regex _raw="<Param3>.*?<\/Param3>"
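The regex command only filters events (keeping those that match); it does not extract anything. To pull values into fields, rex is the usual tool. A sketch, where the capture-group names UserName and PrintShareName are illustrative choices and the base search is a placeholder:

```
index=your_index sourcetype=your_printservice_sourcetype
| rex "<Param3>(?<UserName>[^<]+)</Param3>"
| rex "<Param5>(?<PrintShareName>[^<]+)</Param5>"
| table UserName PrintShareName
```

Since the payload is XML, the xmlkv or spath commands are alternatives if you want all the Param* values at once.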
Hi everyone, I am comparatively new to Splunk and I am trying to create a line-graph visualization of each HTTP status code vs. all traffic traversing the device. I am able to extract all status codes for a specific path, and I can count each status code for a specified time range as below:

index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta"
| stats count by message.status

message.status  count
0               30
200             3129
302             56321
403             10439
408             25

I am trying to create a graph of each status code vs. all traffic as below:

index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta"
| stats count by message.status
| eval x=if('message.status'=503,"ServerDenied","All-Traffic")
| timechart span=20m count by x useother=f

But the output shows only all traffic on the line graph. Could someone please guide me on two things:
1. How can I create a line graph of one status code vs. all traffic?
2. How can I create a line graph that includes all of the above status codes vs. all traffic?

Please let me know if any clarification is needed. Thank you.
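One issue in the attempt above is that stats count by message.status discards _time, so the subsequent timechart has nothing to bucket. A sketch that keeps _time and plots a line per status plus an all-traffic total (field names taken from the post):

```
index=infra_device_sec sourcetype=device:cloudmonitor:json "message.reqPath"="/test/alpha/beta/delta"
| eval status='message.status'
| timechart span=20m count by status useother=f
| addtotals fieldname="All-Traffic"
```

addtotals sums the per-status columns in each row into an All-Traffic series; for a single status vs. all traffic, an eval such as x=if(status=503,"ServerDenied","Other") straight after the base search, followed by timechart span=20m count by x, works as well.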
Please, where can I get updated sample data for practicing searches using SPL? Thanks in advance.
This is my Splunk query, and it seems not to be working: I can't get any results from it. I need help getting the search to return results. Any help would be appreciated, thanks.

index=aws-cloudtrail "userIdentity.type"=Root AND NOT eventType="AwsServiceEvent"
| eval nullParentProcess=if(isnull("userIdentity.invokedBy"),true,false)
| search nullParentProcess=true
| convert ctime(_time) as _time
| stats values(dest) values(eventType) values(eventName) values(userName) latest(_time) by src
| rename values as *
| head 51
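One likely culprit in the query above: in eval, double quotes denote a string literal, so isnull("userIdentity.invokedBy") tests a (never-null) literal string rather than the field; field names containing dots must be wrapped in single quotes. Likewise, bare true and false in eval are treated as field references, so the if() yields null either way and search nullParentProcess=true drops every event. A sketch of the corrected opening lines:

```
index=aws-cloudtrail "userIdentity.type"=Root NOT eventType="AwsServiceEvent"
| eval nullParentProcess=if(isnull('userIdentity.invokedBy'),"true","false")
| search nullParentProcess="true"
```

With those quoting fixes the rest of the pipeline should receive events again.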
Hi All, I asked about this problem in older threads but could not get a working answer, so I created a new thread to get wider visibility and responses.

Resources in hand: I have a lookup table with many fields; the two fields of interest are index and host. I have a list of indexes for which results need to be fetched.

Requirement: for each index value, I need the list of hosts whose records appear in the index but not in the lookup table. The list of index values to search comes from the lookup table. I tried the below; however, I am getting hosts that appear in both the index and the lookup table:

|tstats fillnull_value="unknown" count AS event_count WHERE
    [ |inputlookup table1
      |stats count BY index
      |foreach index [eval <<FIELD>>=replace(replace(lower(trim(<<FIELD>>)),"\s+",""),"\t+","")]
      |eval search_str="(index=".index.")"
      |stats values(search_str) AS search_str
      |eval to_return=mvjoin(search_str," OR ")
      |return $to_return ]
    BY index, host
|search NOT
    [ |inputlookup table1
      |stats count BY index, host ]

Thus, I need your help to resolve the issue. Thank you.
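A sketch of an alternative that sidesteps the NOT subsearch entirely: match the tstats results against the lookup on both fields and keep only rows with no match. It assumes a lookup definition named table1 exists over the file, and it uses index=* where you would keep your subsearch-built index filter:

```
| tstats count AS event_count WHERE index=* BY index, host
| lookup table1 index, host OUTPUT host AS host_in_lookup
| where isnull(host_in_lookup)
| table index host event_count
```

lookup leaves host_in_lookup null whenever the (index, host) pair is absent from the table, so the where clause keeps exactly the hosts that are in the index but not in the lookup.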
An app was updated via the GUI on a SHC member. What exactly does the Splunkbase install do/check? What needs to be done to undo any changes made? Is it best to just uninstall the app and redeploy from the SHC deployer? default/app.conf shows the old version number, which makes me think the files weren't actually updated everywhere. Wondering the best route to proceed to fix the mistake.
Hello All, thanks for a great resource for Splunk and searches. I am using the linux_secure sourcetype. I have a search that returns a value if a field (src) is longer than 1 character; if so, a user has logged in to a host from a "remote" host, i.e. a host without a Splunk universal forwarder installed. When the user logs off the host with a forwarder, I want my base search to return 0 results, or to make the table disappear (using Dashboard Studio). I detect the SSH session open or close in this search. Here is the search I am working with:

sourcetype=linux_secure user=*
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rex "(?P<Status>(?<=session)\s\w+)"
| eval Action=case(Status=" opened","Online",Status=" closed","Off")
| eval Action=if(len(src)>1,"Login from Remote",Action)
| eval Action=if(len(src)=0,"Logged Off",Action)
| sort - Date
| table Date, host, src, Action

My time range is 15 minutes. In a nutshell, I want "Remote" to show when src is present, and zero results when the Action is "Off" or the src length is 0.

Any suggestions will help. Thank you very much, eholz1
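One way to get an empty table (which the dashboard can then treat as "no results") is to filter at the end so only the remote-login rows survive. A sketch built on the search above, assuming src is simply absent or empty when the login is local:

```
sourcetype=linux_secure user=*
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval Action=if(isnotnull(src) AND len(src)>1,"Login from Remote",null())
| where Action="Login from Remote"
| sort - Date
| table Date, host, src, Action
```

When no event in the 15-minute window has a populated src, the where clause removes every row and the search returns 0 results.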
Hi Team, my environment setup is: 2 indexers, 1 search head, 1 heavy forwarder, 1 deployment server and 1 cluster master. My DS is connected to the HF, and from there the data is pushed to the indexers. I would like to use the BOTS v3 dataset (https://github.com/splunk/botsv3) in my environment. Kindly help me with how to load the data in a distributed deployment.
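For reference, the BOTSv3 dataset is distributed as a pre-indexed Splunk app, so it is installed by extracting it into an indexer's apps directory rather than by forwarding it through the HF/DS pipeline. A sketch, assuming a default /opt/splunk install running as the splunk user (the archive filename may differ per the botsv3 release page):

```shell
# Extract the pre-indexed BOTSv3 app onto an indexer, then restart Splunk.
tar -xzf botsv3_data_set.tgz -C /opt/splunk/etc/apps/
sudo -u splunk /opt/splunk/bin/splunk restart
```

Because the data ships as already-indexed buckets inside the app, it will not be replicated by the cluster; placing it on one indexer (or a standalone test instance) is the usual approach.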
Hi, if anyone can help me with this it would be truly helpful. I'm currently practicing to become a Splunk architect and I'm having an issue with file ownership on Ubuntu Linux. I changed the Splunk directory's ownership to the dedicated splunk account I created, but that changes only the folder itself and not its contents. Can anyone give me a command to make everything under the Splunk folder owned by the splunk account I created for Splunk management?

Regards,
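chown applies only to the named path unless told to recurse; the -R flag walks the whole tree. A sketch assuming the install lives in /opt/splunk and the account (and group) are both named splunk:

```shell
# Recursively hand the whole Splunk tree to the splunk user and group.
sudo chown -R splunk:splunk /opt/splunk
# Spot-check a few files afterwards.
ls -l /opt/splunk/bin | head
```

Run it again after any future steps performed as root (e.g. a manual upgrade), since files created by root will need re-owning.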
I have a Simple XML dashboard that I am doing some custom JavaScript with. I would like to get the earliest and latest values from the time picker. However, if the time picker is set to Today, I get "@d" for the earliest and "now" for the latest. Are there any helper functions to convert relative time to epoch? You can see this in my simplified code example below.

... var defaultTokens = mvc.Components.get("default"); var earliest = defaultTokens.get('timePicker.earliest'); //when time picker is today this returns @d var latest = defaultTokens.get('timePicker.latest'); //when time picker is today this returns now ...
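One common workaround is to let splunkd resolve the relative time strings: run a trivial search bound to the same earliest/latest tokens and read the resolved epoch boundaries that addinfo reports (the JS side would then read info_min_time and info_max_time from the results of a SearchManager). A sketch of the SPL half:

```
| makeresults
| addinfo
| eval earliest_epoch=info_min_time, latest_epoch=info_max_time
| table earliest_epoch latest_epoch
```

addinfo attaches the search's own resolved time range, so modifiers like "@d" and "now" come back as concrete epoch values.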