All Posts

How can I send email alert for data from multiple panels present in a dashboard as a single csv file having separate tabs for each panel
h/t Nick, I have iterated on your idea. It stripped the trailing zeros nicely but kept the dot for "6556.000", so I added \d:

| rex field=alert_value "^(?<myfield>[\s\S]*\.\d[\s\S]*?)0*$"

In my case, my field also contains integers:

| rex field=alert_value "^(?<keep>[^\.]+)(?<keepdot>\.{0,1})(?<keepdotdecimal>\d*?)0*$"
| eval human_value = keep . if(len(keepdotdecimal)!=0, "." . keepdotdecimal, "")

This caters for both "6556" and "6,556".
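A quick way to sanity-check that rex/eval pair is to run it over a handful of synthetic values; a minimal sketch, assuming the field is named alert_value as above (the sample numbers are made up):

| makeresults
| eval alert_value=split("6556 6556.000 6556.250 0.500", " ") ``` throwaway test values, one per row after mvexpand ```
| mvexpand alert_value
| rex field=alert_value "^(?<keep>[^\.]+)(?<keepdot>\.{0,1})(?<keepdotdecimal>\d*?)0*$" ``` keep integer part, optional dot, and decimals minus trailing zeros ```
| eval human_value = keep . if(len(keepdotdecimal)!=0, "." . keepdotdecimal, "")
| table alert_value human_value

This should return 6556, 6556, 6556.25 and 0.5 in human_value.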
As the title suggests, I am trying to onboard multiple data sources in Splunk UBA. Is there a way to see, from the CLI, the EPS for each data source ingested? The EPS fluctuates when we run data sources that ingest data from a specific time window, even though the data exists within Splunk Enterprise, compared to running over All Time.
Hi Team, I am new to Splunk and looking for a way to fetch some metrics data from Splunk using the Splunk REST API. Can you please help me with the right approach to implement this? I have explored two ways so far:
1. Using the Search Job endpoints (still trying this out)
2. Using search queries within scripts
If you can help with the pros and cons of the above methods, or suggest any other approach that would be more appropriate, it would be helpful. I am looking for the best way to develop these APIs so the results can be stored in files or a database. Any inputs would be really appreciated. Thanks in advance! Akshada
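For what it's worth, whichever REST route ends up being used, the search that actually gets dispatched for metric data can stay very small; a sketch only, where the index name and metric name are hypothetical placeholders, not anything from the original post:

| mstats avg(_value) WHERE index=my_metrics AND metric_name="cpu.usage.percent" span=5m ``` hypothetical metrics index and metric name ```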
If you are using the /services/collector/event endpoint to send events, try changing it to /services/collector/raw. Test using the commands below:

curl -vvvv -k https://hecendpoint:8088/services/collector/raw -H "Authorization: Splunk XXXXXXXXXXXX" -d '{"id": "raw-message", "createdDateTime": "2023-09-23T11:11:11Z"}'

curl -vvvv -k https://splunkviphec.bbl.int:8088/services/collector/event -H "Authorization: Splunk XXXXXXXXXXXXXXX" -d '{"event":{"id": "event-message", "createdDateTime": "2023-09-23T11:11:11Z"}}'
I have the same issue. How did you all solve it?
Hello, I have already installed the UF on Windows Server 2016, but I get these errors in splunkd.log:

09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Failed to bind to DC, dc_bind_time=0 msec
09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::subscribeToEvtChannel: Could not subscribe to Windows Event Log channel 'Microsoft-Windows-Sysmon/Operational'
09-22-2023 10:19:01.204 +0700 ERROR ExecProcessor [4720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - WinEventLogChannel::init: Init failed, unable to subscribe to Windows Event Log channel 'Microsoft-Windows-Sysmon/Operational': errorCode=5

Can anyone help me with this issue?
Hi @corti77, I'm getting the same error even though my Splunk UF is running as Administrator. I wonder if there is any other way to fix this error? Thanks
The eval statement is wrong - use sum and an if condition to evaluate to 1 or 0:

| stats sum(eval(if(action="Not Found" OR action="Forbidden",1,0))) as failures by src
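A minimal sketch to verify the conditional sum on synthetic data, assuming the fields are named action and src as in the answer (the values below are made up):

| makeresults count=6
| streamstats count as n
| eval src=if(n%2==0, "10.0.0.1", "10.0.0.2"), action=case(n<=2, "Not Found", n<=4, "Forbidden", true(), "OK") ``` two fake sources with a mix of failures and successes ```
| stats sum(eval(if(action="Not Found" OR action="Forbidden",1,0))) as failures by src

Each src should come back with failures=2, since the "OK" rows contribute 0 to the sum.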
Are you just making up numbers in the example? You have four numbers in the first table, but 16 in the second. There is no logic connecting the two.
When you say per day and per week, do you mean you want a unique user count for a week as long as a person logged in at least once in that week, or do you want to show a daily unique user count AND a weekly unique user count?

Building on @yuanliu's comments to get a count of users per day, then

<some other selectors> event=login status=success earliest=-1mon
| timechart span=1w@w dc(user) as users

will give you a weekly unique count from Sun->Sat.

If you want to get unique counts by day as well as by week, then do the daily dc() count and save the values, and then after binning by week add in the dc() count for the week:

| timechart span=1d dc(user) as users values(user) as tmp_users
| eval t=_time
| bin t span=1w@w
| eventstats dc(tmp_users) as weekly_users by t
| fields - tmp_users t
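Here is a self-contained way to see both counts at once on synthetic logins; a sketch only, assuming the user field as above (the generated users and timestamps are invented):

| makeresults count=336
| streamstats count as n
| eval _time=relative_time(now(), "-14d@d") + n*3600, user="user" . (n%40) ``` one fake login per hour for two weeks, cycling through 40 users ```
| timechart span=1d dc(user) as users values(user) as tmp_users
| eval t=_time
| bin t span=1w@w
| eventstats dc(tmp_users) as weekly_users by t
| fields - tmp_users t

The daily users column should sit around 24 while weekly_users lands at 40, which is exactly the difference the eventstats overlay is meant to expose.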
Hi, we have a requirement to migrate ITSI Content Packs to Splunk Cloud. Is it possible to achieve this? If yes, could you please help with the list of steps to perform? I would also like to know what risks are involved.
The argument '(eval(action IN (Not Found,Forbidden)))' is invalid. Did you use quotation marks as my example shows?
Some search heads show "Up" status, but other search heads always show "Pending". How can I solve this issue?
If your raw data has data like blablabla...EventCode=528,blablabla then you can use

index=Win_Logs TERM(EventCode=528) OR TERM(EventCode=540) OR TERM(EventCode=4624) AND user IN (C*,W*,X*)
| timechart span=1w dc(user) as Users

You probably don't need the dedup - it's unnecessary, as the dc() is doing that anyway. Also, if the raw data has user=BLA... then you could also do TERM(user=C*). Note that for term searches, the raw data MUST have those terms. If you look at the lispy in the search log, you will see different lispy for the TERM() variants and the non-TERM variants.
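If the raw events do not actually contain those indexed terms, the plain field-based variant mentioned above still works, just less efficiently at search time; a sketch assuming the same Win_Logs index and field names as the answer:

``` falls back to search-time field extraction instead of indexed terms ```
index=Win_Logs (EventCode=528 OR EventCode=540 OR EventCode=4624) user IN (C*, W*, X*)
| timechart span=1w dc(user) as Users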
The eval statement is wrong - technically it would be

| eval is_historical=if(in(ipAddress, [ search index=<index> operationName="Sign-in activity" earliest=-7d@d
    | stats values(ipAddress) as ipAddress
    | eval ipAddress="\"".mvjoin(ipAddress, "\",\"")."\""
    | return $ipAddress ]), "true", "false")

but this is probably the wrong way to go about it, because you are always doing two searches when you only need one. You should do a single search, for example like this:

index=<index> operationName="Sign-in activity" earliest=-7d@d
| bin _time span=1d
``` Count by day/ip ```
| stats count by _time ipAddress
``` Count unique days and most recent day by IP ```
| stats dc(_time) as countDays max(_time) as latestDay by ipAddress
``` Now calculate historical indicator ```
| eval is_historical=if(countDays>1 AND latestDay>=relative_time(now(), "@d"), "true", "false")
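To convince yourself the countDays/latestDay logic behaves as intended, it can be dry-run on a few synthetic sign-ins; a minimal sketch, assuming the ipAddress field as above (addresses and times are made up):

| makeresults count=3
| streamstats count as n
| eval ipAddress=case(n==1, "10.1.1.1", n==2, "10.1.1.1", n==3, "10.2.2.2") ``` 10.1.1.1 seen yesterday and today, 10.2.2.2 only today ```
| eval _time=case(n==1, relative_time(now(), "-1d@d"), true(), relative_time(now(), "@d"))
| bin _time span=1d
| stats count by _time ipAddress
| stats dc(_time) as countDays max(_time) as latestDay by ipAddress
| eval is_historical=if(countDays>1 AND latestDay>=relative_time(now(), "@d"), "true", "false")

Expect 10.1.1.1 to come back true (seen on more than one day, including today) and 10.2.2.2 to come back false.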
Can you share what is working for the search and a further example of what is not? If the tokens/dropdown are not working, it will be related to the data - your original example of token usage would not work, so that has to change.
Hello, I am trying to implement a behavioral rule that checks whether an IP was used in the last 7 days or not. This is what my search looks like:

index=<index> operationName="Sign-in activity"
| stats count by ipAddress
| eval is_historical=if(ipAddress IN [ search index=<index> operationName="Sign-in activity" earliest=-7d@d | dedup ipAddress | table ipAddress], "true", "false")

I got wrong results, and it seems that only the first search was executed and the eval failed. Any help please? Regards
No, I never did but it did stop happening so I have no idea what caused it.
You have to tell volunteers what your data looks like. Forget Splunk. How do you tell there is a new login, and how do you tell from your data that a new login is successful? Suppose your data has three fields, user, event, and status, where event "login" signifies a new login and status "success" signifies success. A generic way to do this would be

<some other selectors> event=login status=success earliest=-1mon
| timechart span=1d count by user
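For anyone who wants to see the shape of that output without real data, the same timechart can be fed a few synthetic events; a sketch, assuming the user/event/status fields described above (all values invented):

| makeresults count=9
| streamstats count as n
| eval _time=relative_time(now(), "-3d@d") + n*28800, user="user" . (n%3), event="login", status="success" ``` nine fake successful logins spread over three days ```
| timechart span=1d count by user

The result is one row per day with a count column per user, which is the layout the generic search above produces.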