All Topics

Hello, I'm trying to create an alert in the DEV environment whose subject includes "DEV", something like: Splunk Alert: DEV - MyAlert. I can't hardcode this since we deploy the same alert to PROD through Git and we can't make environment-specific corrections to the code. So I'm looking for something like (Splunk Alert: $env$ - $name$), if there is a way to implement this. My Splunk Cloud URLs: DEV: xydev.splunkcloud.com, PROD: xyprod.splunkcloud.com
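One possible approach (a minimal sketch, not a confirmed solution): have the search itself work out which environment it is running on, and reference the result as a token in the alert's email subject. The sketch assumes the rest command is permitted on your cloud stack and that the DEV stack's server name contains "dev"; adjust the match pattern to whatever actually distinguishes the two stacks.

<your existing alert search>
| appendcols
    [| rest /services/server/info splunk_server=local
     | eval env=if(match(serverName, "(?i)dev"), "DEV", "PROD")
     | fields env]

appendcols puts env on the first result row, which is enough for the email subject token, e.g. Splunk Alert: $result.env$ - $name$ ($result.fieldname$ reads from the first row of the results).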
I have installed the OneLogin TA and there is a sourcetype parser from that TA that has taken over everything and is mangling the logs (onelogin:user). Does anybody know why this is happening, and how I can prevent it?
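Without seeing the add-on's props it is hard to say which stanza is responsible, but the usual cause is a props.conf stanza in the TA whose [source::...] pattern (or sourcetype-rename transform) matches more data than intended. Running splunk btool props list --debug and comparing the output against the TA's default/props.conf should show which stanza is claiming the events; once identified, it can be overridden in a local/props.conf. A hedged sketch only, with a placeholder stanza name:

# $SPLUNK_HOME/etc/apps/<onelogin_ta>/local/props.conf
# stanza name below is hypothetical -- copy the exact one from the TA's default/props.conf
[source::...onelogin...]
# lower the precedence so it no longer wins over your own sourcetype assignments
priority = 1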
Hi all, I'm getting this error periodically with my local Splunk Enterprise installation on macOS. I've resorted to just reinstalling when this happened in the past, but I'd like to avoid that and understand the cause / fix. Splunk was running but seemed to hang when I tried to restart from the web UI. After that I get this error when trying to start. If I try to stop via the CLI, it says splunkd is not running. Help is very much appreciated as this is getting to be a real pain.
Can an event be searched using the transaction command without any index or source values? Yes or No? A brief answer explaining the choice would be appreciated.
I keep getting an error message when I am attempting to run this command:

* EventCode=* user=* WinEventLog:Application
| eval src_nt_host=coalesce(src_nt_host,host)
| eval lockout=if(EventCode==644 OR EventCode==4740 OR EventCode==4624,"Yes","No")
| stats latest(_time) as time, latest(src_nt_host) as host, latest(lockout) as lockedout values(dest_nt_domain) as dest_nt_domain count(eval(EventCode=4625 OR EventCode=4771)) as count values(Source_Network_Address) as Source_Network_Address by user
| eval time=strftime(time,"%c")
| rename user to "User Name", Source_Network_Address to "IP Address", count to "Number of Failures"
| table dest_nt_domain "User Name" host lockedout time "IP Address" "Number of Failures"

I need to pull the applications that are running in the Event Viewer. I was able to pull them in a different location, but I want it to show more information along with the user information.
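Without the exact error text it is hard to say what Splunk is objecting to, but one thing that stands out is that WinEventLog:Application is being used as a bare keyword rather than as a source/sourcetype filter. A hedged variant of the opening lines, assuming the Windows events arrive with source="WinEventLog:Application" (the index name is a placeholder):

index=wineventlog source="WinEventLog:Application" EventCode=* user=*
| eval src_nt_host=coalesce(src_nt_host, host)
| eval lockout=if(EventCode==644 OR EventCode==4740 OR EventCode==4624, "Yes", "No")
...

The rest of the pipeline can stay as posted; the leading bare * is redundant and can be dropped. If the goal is to show which applications generated the events, adding values(SourceName) to the stats clause may help, assuming the Windows TA's SourceName extraction is in place.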
How are you handling procedures for Notable Events? The description field doesn't support paragraph breaks, so I'd been using Next Steps as my space for procedures. With the upgrade to 7.3.0, my Next Steps all have {"version":1,"data":" prepended at the start. If I try to update them, it appears Splunk upgrades the text to the new version, line breaks are no longer supported, and my procedures turn into giant blobs of text.
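For reference, the envelope that newer Enterprise Security versions use for next steps is a small JSON object; the text lives in the data field and can include [[action|<action_name>]] links to response actions. Whether embedded \n line breaks still render after the 7.3.0 change is exactly the open question here, so treat this purely as a sketch of the expected structure (the step text and action name are illustrative):

{"version": 1, "data": "1. Confirm the asset owner.\n2. If confirmed, run the [[action|notable_response_action]] action."}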
Hi Splunkers, I have to calculate the daily ingested volume in a Splunk Enterprise environment. Here on the community I found a lot of posts, and related answers, about a similar question: daily license consumption, but I don't know if that is what I need. I mean: we know that, once data is ingested by Splunk, a compression factor is applied and, in a non-clustered environment, it is more or less 50%. So, for example, if I have 100 GB of data ingested per day, the final size on disk will be 50 GB. Well, I have to calculate the total GB BEFORE compression is applied. So, in my example above, the search/method I need should NOT return 50 GB as the final result, but 100 GB. Moreover, in my current environment, I have an indexer cluster. So, what is not clear is: is the daily consumed license what I need? I mean: when I see the daily license consumed by my environment, are the GB returned the ingested ones BEFORE compression, or the compressed ones?
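License usage is measured on the raw (uncompressed) data volume as it is indexed, so the license usage figures correspond to the pre-compression number being asked about. A minimal sketch of the usual search against the license master's internal logs (normal _internal retention applies):

index=_internal source=*license_usage.log type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_ingested_GB

For per-index or per-sourcetype breakdowns, split by the idx or st fields, which are the field names license_usage.log uses for index and sourcetype.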
I am trying to create a transaction where my starting and ending 'event' have exactly the same time. In _raw the time is "Wed Feb 21 08:15:01 CST 2024". My current SPL is:

| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")

But my transaction only has the starting event. So I added the following, which made no difference:

| eval _time = case( match(_raw, "COMPLETED OK"), _time +5, match(_raw, "ABORTED"), _time +5, true(),_time)
| sort _time
| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")

With the above changes, when I look at the events in the 'Time' column they are 5 seconds apart, yet transaction still does not associate them:
2/21/24 8:15:01.000 AM (Starting Event)
2/21/24 8:15:06.000 AM (Ending Event)
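One thing worth noting: the transaction command expects events in reverse chronological order (most recent first), which is how search normally returns them; the added | sort _time flips them to ascending order, which can itself prevent the start/end events from being grouped. A hedged sketch that keeps the artificial time offset but restores descending order before transaction (the +5 offset and the startswith/endswith strings are taken from the original search):

<base search>
| eval _time = case(match(_raw, "COMPLETED OK"), _time + 5, match(_raw, "ABORTED"), _time + 5, true(), _time)
| sort 0 - _time
| transaction keeporphans=true host aJobName startswith=("START of script") endswith=("COMPLETED OK" OR "ABORTED, exiting with status")

If ordering alone doesn't explain it, a stats-based pairing (e.g. earliest/latest of the start and end markers by host and aJobName) is often a more robust alternative to transaction when the events share a timestamp.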
Does anyone have any experience with automated testing of Splunk dashboards? I'm looking for something to test whether all drilldowns and dropdowns work, and preferably a data check to confirm the numbers add up.
I just realized that the NIX TA is being deployed to our forwarders via the deployment apps, to the indexers via the master apps, and to the SHs via the SH apps. It was a surprise for me to realize that the TA is not being deployed to the deployment server, the deployer, the license master, or the cluster master. So how can the TA be deployed to all Splunk servers?
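The management nodes (deployment server, deployer, license master, cluster master) are not covered by master-apps or SH apps, so the usual options are to install the TA on them by hand or to make them deployment clients of the deployment server, like any forwarder. A minimal sketch of the latter, with placeholder host and port values:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each management node
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089

Note that the deployment server cannot be a deployment client of itself, so the TA would still need to be installed manually in etc/apps on that host.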
Splunk Add-on for MySQL Database: what role/permissions are required from the MySQL DBA to use this add-on? What role should be assigned to the user created on the MySQL server to communicate with Splunk DB Connect?
I'm trying to run a base search but it is throwing an error, the reason being that I have two search tags inside a panel.

E.g., base search:

<search id="basesearch">
  <query>index=main source=xyz</query>
  <earliest>$EarliestTime$</earliest>
  <latest>$LatestsTime$</latest>
</search>

Panel search:

<chart depends="$abc$">
  <title>Chart1</title>
  <search>
    <done>
      <eval abc="computer1"</eval>
    </done>
    <search base="basesearch">
      <query> |search host="INFO" OR host="ERROR" panel=$panel1$ |timechart span=$TimeSpan$m count by panel usenull=f useother=f | eventstats sum("host") as _host</query>
    </search>
    <earliest>$InputTimeRange.earliest$</earliest>
    <latest>$InputTimeRange.latest$</latest>
  </search>
  <option name="charting.axisTitleY.visibility">collapsed</option>
  <option name="charting.chart">column</option>
  <option name="charting.drilldown">all</option>
  <option name="charting.fieldColors">{"host":0xFFFF00}</option>
  <option name="charting.legend.placement">bottom</option>
  <option name="refresh.display">progressbar</option>
</chart>

Warning message: Node <search> is not allowed here

The done section is required in the panel so I cannot remove it. Is there a way to use a base search this way?
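A base search and its post-process searches normally aren't nested: the panel's single <search> element carries the base attribute itself, and the <done> handler can sit inside that same element, so only one <search> node is needed per panel. A hedged sketch of that shape (the <set> form is used for the token here; if an <eval> expression is really needed, it goes in the same place):

<chart depends="$abc$">
  <title>Chart1</title>
  <search base="basesearch">
    <query>search host="INFO" OR host="ERROR" panel=$panel1$ | timechart span=$TimeSpan$m count by panel usenull=f useother=f | eventstats sum("host") as _host</query>
    <done>
      <set token="abc">computer1</set>
    </done>
  </search>
  <option name="charting.chart">column</option>
</chart>

Note that a post-process search inherits the time range of its base search, so the <earliest>/<latest> elements belong on <search id="basesearch"> rather than on the child.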
Hello everyone, I was wondering if there are any ways to back up / version-control dashboards that were created directly on Splunk Cloud, either to a local Git repo or at least to make sure that they can be recovered/rolled back if a user/administrator edits or deletes one. So far I found this app: https://splunkbase.splunk.com/app/5061, but I think that this app is more like a dashboard and doesn't really provide any of the use cases that I just described. BR, Andreas
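One lightweight option, sketched here on the assumption that the rest command and this standard endpoint are available to your role on the cloud search head (the app filter is illustrative): pull the dashboard XML via | rest and archive the results externally, for example through a scheduled export that gets committed to Git.

| rest /servicesNS/-/-/data/ui/views
| search eai:acl.app="search"
| table title eai:acl.app eai:acl.owner updated eai:data

The eai:data column contains the full Simple XML source of each dashboard, so exporting this result (CSV/JSON from the UI, or via the REST API from an external script) gives you something that can be versioned in a local Git repository.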
Hi, could anyone please guide me on detecting an attacker moving laterally in the environment? It can be a challenge. How can we write a correlation search for this, and are there any prerequisites that need to be followed? Thanks in advance.
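As a starting point only (not a complete detection), one common pattern is to flag a single account or source host authenticating to an unusually large number of destinations in a short window. The sketch below assumes Windows Security data with network logons (EventCode 4624, Logon_Type 3) and the field names produced by the Windows TA; the index name and threshold are illustrative, and the main prerequisites are that the relevant data sources are onboarded with working field extractions / CIM mappings.

index=wineventlog EventCode=4624 Logon_Type=3 earliest=-1h
| stats dc(dest) as dest_count values(dest) as destinations by user src
| where dest_count > 5

In Enterprise Security this would be saved as a correlation search; the Enterprise Security Content Update (ESCU) app also ships ready-made lateral movement detections that can serve as templates.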
Hi, I have this situation:

index="idx" [| inputlookup name.csv | table id name ]

Data in idx:
id    name
1a2   aaa
1A2   aaa
12a   bbb

Lookup name.csv:
id    name
1a2   aaa

The result is that it extracts the first 2 lines. How do I extract just the first line (the exact-case match)? Thank you, Simone
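The subsearch turns the lookup rows into index search terms, and term matching at search time is case-insensitive, which is why both 1a2 and 1A2 come back. A hedged workaround, assuming name.csv behaves like a normal file-based lookup (whose field matching is case-sensitive by default): keep the subsearch to narrow the events, then re-check each event against the lookup and keep only exact-case matches.

index="idx" [| inputlookup name.csv | table id name ]
| lookup name.csv id OUTPUT name as lookup_name
| where isnotnull(lookup_name)

Only events whose id matches the lookup with identical case get a lookup_name value, so the 1A2 row is dropped.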
Can someone please help with the regex that can be used to view the below event in tabular format?

Event:

INFO > 2024-02-02 16:12:12,222 - [application logs message]: ==============================================
Part 1.    session start is completed
Part 2.    Before app message row count    : 9000000
           Before app consolidation row count    : 8888800
Part 3.    append message completed
Part 4.    After app message flush row count : 0
           After app message flush row count : 1000000
=================================================

How can we use regex to get the fields from the above event and show them in a table like the one below?

parts     message                          count
Part 1    session start is completed
Part 2    Before app message row count     9000000
                                           8888800
Part 3    append message completed
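A hedged sketch of one way to do this without a single monster regex: split the event into lines, expand them, and then pull the optional part number, the message text and the optional trailing count out of each line. The filters and field names below are assumptions to adjust to the real data.

<base search>
| rex max_match=0 "(?<line>[^\r\n]+)"
| mvexpand line
| where NOT match(line, "^\s*=+\s*$") AND NOT match(line, "^INFO")
| rex field=line "^\s*(Part\s+(?<parts>\d+)\.)?\s*(?<message>.+?)\s*(:\s*(?<count>\d+))?\s*$"
| filldown parts
| table parts message count

The filldown keeps continuation lines (such as the second count under Part 2) attached to the right part number.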
Hi, my requirement is to get 30-minute results using timechart span=30m, starting from the start time that I have specified. The start time can be, for example, 11:34 AM or 11:38 AM or 11:42 AM, etc. But instead I am getting results in 30-minute intervals aligned to the clock, i.e. 11:30 AM, 12:00 PM, 12:30 PM, 1:00 PM, etc.
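The bin command (which timechart uses under the hood) supports an aligntime option that anchors the buckets to something other than the default epoch alignment; aligning to the search's earliest time gives 11:34, 12:04, 12:34 style buckets when the search starts at 11:34. A minimal sketch (index and aggregation are placeholders):

index=my_index
| bin _time span=30m aligntime=earliest
| stats count by _time

aligntime also accepts an explicit time specifier (e.g. aligntime=@d+11h+34m) if the anchor should be fixed rather than tied to the search window, and timechart lists the same aligntime option among its bin options.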
Hello Community, I am encountering an issue where logs are not being received for two regions but are successfully received for another region. Upon further investigation, we didn't observe any errors in splunkd.log, the inputs & outputs configs are in place, and there are no disk space issues either. What could be the possible reason for this? Any help is appreciated. All our indexers & SHs are hosted in Splunk Cloud.
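A couple of searches that often help narrow this down (sketches only; how hosts map to regions is an assumption): check when each host last sent anything, and whether the forwarders in the affected regions are connecting to the cloud indexers at all.

| tstats latest(_time) as last_event where index=* by host
| eval minutes_since_last_event = round((now() - last_event) / 60)
| sort - minutes_since_last_event

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname

If the affected regions' forwarders are missing from tcpin_connections entirely, the usual suspects are outbound firewall rules or proxies blocking the Splunk Cloud endpoints, or an out-of-date forwarder credentials app on those hosts.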
I'm trying to build an alert that looks at the number of logs from the past three days and compares it to the number of logs from the three days before that. I want to trigger an alert if the log count drops by 30% or more in any period. I've seen this done with search and with _index, but I'm unsure which way is best. I don't want to build almost 100 searches for 100 different sourcetypes; I'd much rather do it by the twenty-something indexes. I'm not sure if ML is the right way to do this, but I've seen times when logs stopped flowing and it wasn't noticed for days, and I want to prevent that from happening. Any help is appreciated.
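A tstats-based sketch that compares the two three-day windows per index (the windows and the 30% threshold mirror the description; the guard for indexes with no data in the earlier window is an assumption worth keeping):

| tstats count where index=* earliest=-6d@d latest=@d by index _time span=1d
| eval period = if(_time >= relative_time(now(), "-3d@d"), "recent", "previous")
| stats sum(count) as total by index period
| xyseries index period total
| fillnull value=0 recent previous
| eval pct_change = if(previous > 0, round((recent - previous) / previous * 100, 1), null())
| where pct_change <= -30

Scheduled once a day with an alert condition of "number of results > 0", this yields one row per index whose volume dropped by 30% or more, which avoids maintaining a separate search per sourcetype.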