Hi @ITWhisperer, Thanks for the reply and for catching the issue with the date_hour comparison. You're absolutely right, I do want to include 7AM. I tried using timechart instead of stats in the query and selected a timeframe during which I know for a fact that there are no occurrences of the event containing the signature. I was expecting to see a count of 0, but instead I got no results. I tried adding fillnull to the bottom of the query, thinking the count might be returning null instead of 0, but to no avail.
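One known workaround for the no-results case, sketched below with the index and signature from the original question: when zero events match the base search, nothing flows down the pipeline, so neither timechart nor fillnull ever runs. Searching the index broadly and flagging matches with eval searchmatch() keeps the timeline populated as long as the index receives any events at all during the window:

index=its-em-pbus3-app
| eval hit = if(searchmatch("Checking the receive queue for a message of size"), 1, 0)
| timechart span=5m sum(hit) AS count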
Hi team, We have installed a new Splunk UF on one of our file servers and have configured all the config files correctly, but data is not getting forwarded from the file server to Splunk. It shows the below error message, and if I search by index name there is no data on the search head. Could you please help with this?
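As a first check (a generic sketch, not specific to this environment; the host name is a placeholder), the forwarder's own configuration and internal logs usually reveal why nothing arrives:

# On the forwarder, confirm the output configuration and connection state:
$SPLUNK_HOME/bin/splunk btool outputs list --debug
$SPLUNK_HOME/bin/splunk list forward-server

# From the search head, look for errors the forwarder reports about itself:
index=_internal host=<your_file_server> source=*splunkd.log* (ERROR OR WARN)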
Hi, We are interested in installing a Geo Server as mentioned here: Host a Geo Server (appdynamics.com). However, we cannot find the GeoServer.zip mentioned in the downloads. Where can we find this .zip file? Thanks, Roberto
The timechart command will fill in the missing timeslots for you:

| where (date_hour >= 7 AND date_hour < 19)
| timechart span=5m count

(btw, you probably want >= 7 if you want from 7am)
When you use a field name with embedded spaces on the right-hand side of an assignment, it should be in single quotes (not double quotes):

| eval "Start Time" = strftime(strptime('Start Time', "%FT%T.%Q%Z"), "%F %T")
| reverse
Hi Splunk Community, I need to build an alert that will be triggered if a specific signature is not present in the logs for a period of time. The message shows up in the logs every 3 or 4 seconds under BAU conditions, but there are some instances of longer intervals, going up to 4 minutes. What I had in mind was a query that runs over a 15-minute timeframe using 5-minute buckets, to ensure that I would catch the negative trend and not only the one-offs. I have made it this far in the query:

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| bin _time span=5m aligntime=@m
| eval day_of_week = strftime(_time,"%A")
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eval date_hour = strftime(_time, "%H")
| where (date_hour > 7 AND date_hour < 19)
| stats count by _time

I only need the results for Monday to Friday between the hours of 7AM and 7PM. The query returns the count by _time, which is great, but if the signature is not present I don't get any hits, obviously. So I can count the number of occurrences within the 5-minute buckets, but I can't assess the intervals or determine the absence using count. I thought of, perhaps, manipulating timestamps so I could calculate the difference between the current time and the last timestamp of the event, but I am not exactly sure how to compare a timestamp to "now". I would appreciate some advice on either how to count "nulls" or how to cross-reference the timestamps of the signature against the current time. Thank you all in advance.
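For the "compare a timestamp to now" part of the question, a minimal sketch (using the index and signature from the query above): stats latest(_time) gives the last time the signature was seen, and now() returns the current time, so the gap between them can drive the alert condition:

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| stats latest(_time) AS last_seen
| eval gap_seconds = now() - last_seen
| where gap_seconds > 300

The alert would then trigger whenever this search returns a row, i.e. whenever the signature has been absent for more than the chosen threshold.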
Hi @ravida, I have never experienced this behavior, and I'm using many custom Correlation Searches. Is this issue present for all your Correlation Searches or only for that one? Ciao. Giuseppe
Hi @Chirag812, I cannot test it, but it should work:

index=abc sourcetype=abc
| bin span=1m _time
| stats count AS TimeTaken BY IP _time
| timechart perc95(TimeTaken) AS Perc_95 by IP

Ciao. Giuseppe
Yes, same here: since 10 May push notifications have not been working, and no config has been edited. All the alerts are there in Splunk Mobile when I open it, and emails are being sent, but I am not getting any notifications on any of my team's phones.
Hi, I am having some trouble understanding the right configuration for collecting the logs from the Event Hub of the app "Microsoft Cloud Services". From the documentation (Configure Event Hubs) it is not clear how to set these three parameters for a log source that collects a LOT of logs every minute.

interval --> The number of seconds to wait before the Splunk platform runs the command again. The default is 3600 seconds. Is there a way in the _internal logs to check when the command is executed?

max_batch_size --> The maximum number of events to retrieve in one batch. The default is 300. This is pretty clear, but can we increase this value as much as we want? I believe we are encountering some performance issues with that.

max_wait_time --> The maximum interval in seconds that the event processor will wait before processing. The default is 300 seconds. Processing what? Waiting for what?

Does anyone know a combination of values for these three fields that could optimize an Event Hub with thousands and thousands of logs?
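On the first question: the add-on's activity should show up in _internal. A rough sketch of the kind of search that reveals when the input last ran (the source patterns below are assumptions; adjust them to the log file names the add-on actually writes under $SPLUNK_HOME/var/log/splunk):

index=_internal (source=*mscs* OR source=*cloudservices*)
| sort - _time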
I am not able to search on any attribute that has a .(dot) in its name, like env.cookiesSize.

NOT WORKING:

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"
| spath "context.env.cookiesSize" | search "context.env.cookiesSize"=7670

WORKING:

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"

Let me know the solution for this. The event looks like:

context: {
    duration: 428.70000000006985
    env.automation-bot: false
    env.cookiesSize: 7670
    env.laneColor: blue
}
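One workaround worth noting for the dotted-field problem (a sketch based on the query above): spath's output= option copies the value into a dot-free field name, which the search command then handles without ambiguity:

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath output=cookiesSize path=context.env.cookiesSize
| search cookiesSize=7670

Alternatively, where with the field name in single quotes (| where 'context.env.cookiesSize'=7670) avoids the rename.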
Sorry for bumping this thread. I got this working just fine: I can run a search with the macro and get the email. However, when I roll out the alert and macro from our git repo I get recipients="". The macro is there and looks correct, but when the scheduled task runs and should send an email, the content of the macro comes back empty. The solution is still correct; I'm probably just missing something with permissions.
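One thing worth checking (a sketch, assuming the macro ships inside an app deployed from git; the macro name below is a placeholder): if the app's metadata does not export the macro globally, a scheduled search running in a different app context can resolve it as empty:

# metadata/default.meta in the deployed app (macro name is a placeholder)
[macros/email_recipients]
export = system
access = read : [ * ]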
Hello, I have done the following query, but it is not affecting the Start Time column in the result!

| eval "Start Time" = strftime(strptime("Start Time", "%FT%T.%Q%Z"), "%F %T")
| reverse

When I try directly on the value of this field with a query, it works:

index="xxx" x
| table "Interface" "Status"
| stats latest("Status") as latest_time by "Interface"
| eval latest_time=strftime(strptime(latest_time, "%FT%T.%Q%Z"), "%F %T")
| sort 0 - latest_time

I think it's more a problem with the position of the eval in the query, which doesn't affect the display! Thanks, Laurent
We are receiving some notables that reference an encoded command being used with PowerShell, and the notable lists the command in question. The issue is that the command it lists appears to be incomplete when we decode the string. Does anyone know a way for us to hunt down the full encoded command referenced in the notable?
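One hedged approach, sketched below (the index, EventCode, and field names are assumptions that depend on what endpoint data you collect): if Sysmon process-creation events or 4688 events with command-line auditing are indexed, the full command line is often present there even when the notable truncates it:

index=wineventlog EventCode=1 (CommandLine="*-enc*" OR CommandLine="*-EncodedCommand*")
| table _time host User CommandLine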
They are blank. I was under the impression these are for showing the contributing events. Separately, I am trying to get Incident Review to show only those notables in the list, instead of all notables.
Hello Splunkers, I am trying to set up an export scenario to Rapid7 in which all Active Directory data will be transferred to the other service. With the official guide from Splunk I can export the data, but the data is not formatted as JSON. Instead, every line is sent on its own, so every attribute ends up as its own entry, which won't help because I can't search a log that is split into different pieces. Does anyone have experience with this transfer process?
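One possibility (a sketch, not the method from the official guide; the index, sourcetype, and grouping key below are placeholders): if each AD object's attributes arrive as separate rows, they can be regrouped per object and collapsed into a single JSON document with the tojson command available in recent Splunk versions:

index=<your_ad_index> sourcetype=<your_ad_sourcetype>
| stats values(*) AS * by objectGUID
| tojson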
index=abc sourcetype=abc
| timechart span=1m count(IP) AS TimeTaken

Now I want to get the 95th percentile of these total IP counts, like below:

| stats perc95(TimeTaken) as Perc_95 by IP

So how should I write this query?
Hi, I have a JSON file in Splunk with an arguments{} field like this:

field1=[content_field1]
field2=[content_field2]
field3=[content_field3]

Splunk doesn't recognize the fields field1 etc. I assume it is because this is not really JSON format, but I want to be sure. I can extract the fields with rex, but it would be better if Splunk could recognize the fields automatically. I think the content of the log file should be something like this:

arguments{}:{"field1":"content_field1", "field2":"content_field2", "field3":"content_field3"}

but I want to be sure that's the best way (because if it is, the logging has to be changed). Does Splunk recognize the fields automatically if events are logged this way? Is the above the best way, or are there better ways to let Splunk recognize the fields automatically?
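For reference: if the application logs real JSON (as in the corrected example above), search-time JSON extraction can be enabled per sourcetype. A minimal sketch, with the sourcetype name as a placeholder:

# props.conf on the search head (sourcetype name is a placeholder)
[my_json_sourcetype]
KV_MODE = json

With KV_MODE = json, Splunk extracts field1, field2, etc. automatically at search time; INDEXED_EXTRACTIONS = json on the forwarder is the index-time alternative.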
Thank you, I had to play around with my search a bit, but this overall syntax was the trick.
I may be misunderstanding your question, but it's simple:

index=testing field1="write" field2="*@abc.com"
| table field1, field2, ....

If "@abc.com" is a user name and not a domain (as I assume), you do not need to put the wildcard (*) before it. If you put it, it will match every user ending in @abc.com, like user1@abc.com, user2@abc.com...

Alternative:

index=testing
| stats count by field1 field2
| search field1="write" AND field2="*@abc.com"

Regards,