All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How are the results different? What do you get? What were you expecting? Could it be to do with backslash escaping? Can you get the results you were expecting by adding additional backslashes?
The following query returns the expected result in Postman but returns a different result via JavaScript fetch:

search host="hydra-notifications-engine-prod*" index="federated:rh_jboss" "notifications-engine ReportProcessor :"
| eval chartingField=case(match(_raw,"Channel\s*EMAIL \|"),"Email", match(_raw,"Channel\s*GOOGLECHAT \|"),"Google Chat", match(_raw,"Channel\s*IRC \|"),"IRC", match(_raw,"Channel\s*SLACK \|"),"Slack", match(_raw,"Channel\s*SMS \|"),"SMS")
| timechart span="1d" count by chartingField

What is the issue?
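Not a diagnosis from the thread, but one common cause of a Postman-vs-fetch discrepancy like this is backslash handling in JavaScript string literals: an unrecognized escape such as \s silently collapses to "s" before the query ever leaves the browser, so Splunk receives a different regex than Postman sends. A minimal sketch (the query text is a shortened stand-in, not the full search):

```javascript
// In a JS string literal, the unrecognized escape \s becomes just "s",
// so the regex Splunk receives is "Channels*EMAIL" and match() fails.
const broken = "match(_raw, \"Channel\s*EMAIL\")";   // \s silently becomes "s"
const fixed  = "match(_raw, \"Channel\\s*EMAIL\")";  // \\s stays as \s

console.log(broken.includes("\\s")); // false - the backslash was lost
console.log(fixed.includes("\\s"));  // true  - the regex is intact
```

It is also worth passing the whole query through encodeURIComponent before embedding it in the request body, so characters like | and " survive transport.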
If you have a timechart split by a field, then it's different to stats, because your field name is not called total. You need to use this type of construct:

| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

Here's an example you can run that generates some random data:

| makeresults count=1000
| eval p=random() % 5 + 1
| eval player="Player ".p
| streamstats c
| eval _time=now() - (c / 5) * 3600
| timechart span=1d count by player
| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

However, it's still not entirely clear what you are trying to do. You talk about a week of 700 but are timecharting by 1 day, and you say "if Lebron has 100 one week" - what are you trying to get with the values by day? Are you trying to normalise all players so they can be seen relative to each other, or something else? Perhaps you can flesh out what you are trying to achieve if you think of your data as a timechart.
True, but I didn't want to give away all my secrets! 
@pjac1029  You're most welcome! I'm glad to hear that it worked for you.
Hi @livehybrid , Yes, I do have the appIcon.png in the folder $SPLUNK_HOME/etc/apps/search/appserver/static/, but the error still appears. I'm also facing the same issue in my custom Splunk app located at $SPLUNK_HOME/etc/apps/Custom_app/appserver/static/. I tried adding the appIcon.png (36x36) there as well, restarted Splunk, and checked my custom app (and all other Splunk apps), but the appIcon error still persists - even in the dashboards of the core Splunk app.
Hi @livehybrid  The screenshot I sent is from the Search Head and shows the exact same configuration deployed to the Heavy Forwarder. This is the first Heavy Forwarder that the data lands on. The data is sent to the Heavy Forwarder using rsyslog, and the Heavy Forwarder uses [monitor:] to monitor the logs.
Hi @livehybrid

I checked the query and it worked. I made a small change to display DD/MM/YYYY HH:MM:SS using the query below, and it worked as expected. I am marking your answer as the solution since it gave me the base query to develop from. Thank you very much!

| eval timestamp=strftime(now(), "%d/%m/%Y %H:%M:%S")
| table timestamp, <<intended fields>>
@hrawat  The email sent titled "Splunk Service Bulletin Notification" was very poorly written. It explicitly states to upgrade to one of the following versions; it doesn't say "or later". We have recently upgraded all our forwarders to be running 9.4.1, which according to the service bulletin email isn't fixed - only 9.4.0 is (was there a regression, or is the email wrong?).
Hi @ranafge

Okay, this is progress in terms of diagnosing. So - you see events if you search index="wazuh-alerts"? If you search index="wazuh-alerts" "Medium" - do you get any results then? I'm trying to determine if it's a field extraction issue or if the data is actually missing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @Alan_Chan

I've checked this config locally and it does work for your sample event, so something else isn't right here. I think there is a typo in what you posted, so I used the value from the screenshot - but please confirm the asterisk is present in the SEDCMD that is deployed. Is the screenshot you sent from the Search Head? Is the exact same config deployed to the Heavy Forwarder? And is this the only (or first) HF that the data lands on?

How is the data arriving at the HF? If it is via HEC using the event endpoint then this configuration will not work, and you would need to use INGEST_EVAL or move to the raw HEC endpoint.
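For reference, the INGEST_EVAL route mentioned above would look roughly like this - a sketch with hypothetical stanza and sourcetype names and a placeholder pattern, not the poster's actual config:

```
# transforms.conf
[strip_unwanted_chars]
INGEST_EVAL = _raw=replace(_raw, "<pattern-to-remove>", "")

# props.conf
[my_sourcetype]
TRANSFORMS-strip = strip_unwanted_chars
```

Unlike SEDCMD, INGEST_EVAL runs after HEC event-endpoint parsing, which is why it still works in that scenario.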
Hi @pjac1029

Simply add "<prefix>production</prefix>" within your <input></input> block.
that worked ! Thanks so much for your help. I really appreciate it !
If I were to be nitpicky I'd say that it captures stuff like 000.999.123.987, which is not a valid IP
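For what it's worth, a sketch of a tighter pattern that restricts each octet to 0-255 (hypothetical, not taken from the thread - anchor or word-bound it as needed for your use case):

```javascript
// Each octet: 250-255, 200-249, 100-199, or 0-99 (rejects leading-zero triples).
const OCTET = "(25[0-5]|2[0-4]\\d|1\\d\\d|[1-9]?\\d)";
const IPV4 = new RegExp(`^${OCTET}(\\.${OCTET}){3}$`);

console.log(IPV4.test("192.168.1.1"));     // true
console.log(IPV4.test("000.999.123.987")); // false - octets out of range
```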
Actually, I believe the docs are correct, since BREAK_ONLY_BEFORE applies to the line-merging stage which - if enabled - happens after line breaking. Anyway, @jackin, unless you have a very, very peculiar use case, as a rule of thumb you should never enable line merging. It is resource-intensive, and most often you can achieve the same result by simply choosing a proper line breaker. So, here is how I would approach this: first try the default ([\r\n]+) line breaker and check whether the stream gets broken into separate lines (disable SHOULD_LINEMERGE!). If it does, you can start working out how to anchor the breaker to the opening bracket. If it doesn't, that means you have some other characters in your data stream and you have to check what they are.
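To make that concrete, a minimal props.conf sketch along those lines - the sourcetype name is hypothetical, and the (?=\[) lookahead assumes every event starts with an opening bracket, so verify that against your data first:

```
[my_sourcetype]
SHOULD_LINEMERGE = false
# Break on newlines, but only where the next event starts with "["
LINE_BREAKER = ([\r\n]+)(?=\[)
```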
@pjac1029  I reviewed the XML and implemented the solution using a token prefix strategy combined with test data via makeresults (since I currently don't have real firewall data). I replaced the <done> block with a <change> block in the dropdown input to ensure that the prefix logic (e.g., "prod\") applies every time the user changes the selection. This resolved the issue where the token was only set once. I also validated the dropdown behavior and confirmed that event filtering works as expected based on the selected username.

<dashboard version="1.1">
  <label>Firewall Blocks Dashboard (Test Data)</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="raw_username" searchWhenChanged="true">
      <label>Select Username</label>
      <fieldForLabel>username</fieldForLabel>
      <fieldForValue>username</fieldForValue>
      <search>
        <query>| inputlookup test_users.csv | table username</query>
      </search>
      <change>
        <set token="username">prod\\$value$</set>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>
            | makeresults
            | eval user="prod\\john.doe", action="blocked"
            | append [| makeresults | eval user="prod\\jane.smith", action="allowed"]
            | append [| makeresults | eval user="prod\\bob.jones", action="blocked"]
            | search user="$username$"
            | table user action
          </query>
          <earliest>-7d@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
I created a dashboard with an input that allows the user to select a user field from a dropdown that's populated by a lookup table. I want to prefix the selected user with "production\" and run a query in a panel that retrieves firewall events where the user equals the new token value (prefixed with "production\"), since the user in the firewall index is prefixed with "production". The first time I select the user from the lookup, the query retrieves events. The next time I select another user, the set token does not prefix the token with "production"; instead it searches with the user-selected value and returns no events. The <done> block apparently only executes the first time through. Below is the XML. Thanks in advance.

<label>firewall blocks</label>
<fieldset submitButton="false" autoRun="true">
  <input type="dropdown" token="username" searchWhenChanged="true">
    <label>username</label>
    <fieldForLabel>username</fieldForLabel>
    <fieldForValue>username</fieldForValue>
    <search>
      <query>| inputlookup test_users.csv | table username</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
      <done>
        <set token="username">prod\\$username$</set>
      </done>
    </search>
  </input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>
          index=firewall sourcetype=firewall user = "$username$"
          | table $username$ user action
        </query>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </search>
      <option name="drilldown">none</option>
    </table>
  </panel>
I contend the documentation is incorrect.  LINE_BREAKER and BREAK_ONLY_BEFORE are contradictory and shouldn't be used together.  At the very least, great care should be used to ensure the two settings work properly.
First of all, thanks for your reply. I ran the following search in Splunk:

index="wazuh-alerts" "data.vulnerability.severity"="Medium" | stats count

I also tested for other severity levels like "High" and "Low," but the result was always 0. This indicates that no vulnerability detection events are being indexed in Splunk. Even though other types of data are coming through, there are currently no events where data.vulnerability.severity is populated with "High," "Medium," or "Low." It suggests that either:
- Vulnerability Detector is not generating results,
- the events are not being forwarded to Splunk properly,
- or the events are being indexed under a different sourcetype, index, or field structure.

Would appreciate any guidance on how to dig deeper into this!
Hi @Alan_Chan ,

this transformation is not useful on the SH; it must be located on the first HF that the data passes through. Are you sure that you applied it on the first HF?

Then check that the sourcetype you associated the SEDCMD command with is the correct one, and that there isn't any other transformation on this sourcetype.

Then, are you sure that it is useful to remove these few chars?

Ciao.

Giuseppe
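As a sketch of where such a setting belongs - with a hypothetical sourcetype name and a placeholder pattern, to be adjusted to the actual data:

```
# props.conf on the FIRST heavy forwarder the data passes through
[my_sourcetype]
SEDCMD-strip_chars = s/<pattern-to-remove>//g
```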