All Posts


%U is the week of the year: https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Commontimeformatvariables#Specifying_days_and_weeks You can easily do the math to work out which week of the month it is based on your start day of the week. See this example, which calculates the week number with the start of the week being either Sunday or Monday.

| makeresults count=31
| streamstats c
| eval _time=strptime(printf("2024-03-%02d", c), "%F")
| fields - c
| eval day_of_week=strftime(_time, "%A")
| eval day_of_month=strftime(_time, "%d")
| eval wday_sunday_start=strftime(_time, "%w"), wday_monday_start=if(wday_sunday_start=0,7,wday_sunday_start)
| eval week_of_month_sunday_start=ceil(max((day_of_month-wday_sunday_start), 0) / 7) + 1
| eval week_of_month_monday_start=ceil(max((day_of_month-wday_monday_start), 0) / 7) + 1
Good afternoon, Yes, I am most assuredly not on AWS, but running an on-premise solution.  This means that I cannot archive off to S3 buckets, which are an AWS thing (for the most part). For your suggested solutions, can you point me towards the relevant documentation or add some additional details that might get me started on the right path?   My gut reaction is that option 1 is likely the solution of choice.  The Splunk configuration "props + transforms.conf" part has me scratching my head a bit, though I think I got it from the rsyslog part onward. Thanks!    
Hello. I am interested in data that occurs from Tuesday night at 8 PM until 6 AM. The caveat is that I need 2 separate time periods to compare. One of which is the 2nd Tuesday of each month until the 3rd Thursday. The other is any other day in the month. So far I have:

| eval day_of_week=strftime(_time, "%A")
| eval week_of_month=strftime(_time, "%U")
| eval day_of_month=strftime(_time, "%d")
| eval start_target_period=if(day_of_week=="Tuesday" AND week_of_month>1 AND week_of_month<4, "true", "false")
| eval end_target_period=if(day_of_week=="Thursday" AND week_of_month>2 AND week_of_month<4, "true", "false")
| eval hour=strftime(_time, "%H")
| eval time_bucket=case(
    (start_target_period="true" AND hour>="20") OR (end_target_period="true" AND hour<="06"), "Target Period",
    (hour>="20" OR hour<="06"), "Other Period")

My issue is that my "week of month" field is reflecting the week of the year. Any help would be greatly appreciated.

EDIT: I placed this in the wrong location, all apologies.
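If the goal is specifically the 2nd Tuesday and 3rd Thursday (rather than a true calendar week-of-month number), a hedged alternative is the Nth-occurrence rule: the Nth occurrence of a weekday always falls on days 7*(N-1)+1 through 7*N of the month, so ceil(day_of_month/7) gives the occurrence number. A minimal, untested sketch reusing the flag names from the search above:

| eval day_of_week=strftime(_time, "%A")
| eval nth_weekday=ceil(tonumber(strftime(_time, "%d")) / 7)
| eval start_target_period=if(day_of_week=="Tuesday" AND nth_weekday=2, "true", "false")
| eval end_target_period=if(day_of_week=="Thursday" AND nth_weekday=3, "true", "false")

This only flags the boundary days; the days in between and the 8 PM-6 AM hour logic still need the existing case() handling.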
Hello, Recently we replaced our syslog server, moving from rsyslog to syslog-ng. We are collecting the network devices' logs - every source logs to its own <IPaddress.log> file - and a universal forwarder pushes them to the indexer. Inputs and outputs are OK, the data is flowing, and the sourcetype is standard syslog. Everything is working as expected... except for some sources. I spotted this because the log volume has dropped since the migration; for those sources, I do not have all of the events in Splunk. I can see the file on the syslog server - let's say there are 5 events per minute. The events are similar - for example, XY port is down - but not identical; the timestamp in the header and the timestamp in the event's message are different (the events are still the same length). So in the log file there are 5 events/min, but in Splunk I can see only one event per 5 minutes. The rest are missing... Splunk randomly picks ~10% of the events from the log file (all the extractions are OK for those, and there is no special character or anything odd in the "dropped" events). I feel it is because of the similar events - Splunk thinks they are duplicates - but on the other hand it cannot be, because they are different. Any advice? Should I try to add some crcSalt or try to change the sourcetype? BR. Norbert
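For what it is worth, a hedged diagnostic that could be run before changing crcSalt or the sourcetype - the index and source below are placeholders for your own values - to check whether the "missing" events were merged into multi-line events rather than dropped:

index=your_index source="*.log"
| eval lines_in_event=mvcount(split(_raw, "\n"))
| stats count AS indexed_events sum(lines_in_event) AS raw_lines by source, sourcetype

If raw_lines is much larger than indexed_events, the events are being combined at line-breaking/line-merging time rather than skipped at the input, which points towards sourcetype line-breaking settings rather than file-level CRC checks.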
That works great!  Just what I was looking for. Thanks much for your support, bowesmana!
@phanTom   OK, so maybe there is a way to use the IN operator on values in custom fields? For example, the custom field key is department, and I want to filter for the values business and HR.
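For reference - and assuming this is plain SPL rather than a SOAR filter, since the post does not say - the IN operator works directly in the base search, and the in() eval function works later in the pipeline; the index name is a placeholder:

index=your_index department IN ("business", "HR")

or

... | where in(department, "business", "HR")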
Please find the snapshot below. No listing is showing for the selected options.
I am new to Splunk Mission Control and assigned to demo the Splunk Cloud platform with the following features:

Incident Management: Simplifies the detection, prioritization, and response process.
Investigative Capabilities: Integrates diverse data sources for thorough investigations.
Automated Workflows: Reduces repetitive tasks through automation.
Collaboration Tools: Facilitates communication and information sharing within the SOC team.
Details: Provide examples of automated workflows specific to common SOC scenarios.

Can somebody provide me with links to "How to" videos and documentation to set up my demo? Thank you.
Group the Failed counts by Event.

...
[| search index=abcd "API" AND ("Couldn't save")
 | rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:"
 | dedup ID
 | stats count as Failed by Event ]
...

You could do the same thing without a join for better performance.

index=abcd ("API : access : * : process : Payload:") OR ("Couldn't save")
| eval status = if(searchmatch("Couldn't save"), "Failed", "Success")
| stats count as Total, count(eval(status="Failed")) as Failed by Event
| eval Success=Total-Failed
| table Event Total Success Failed
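A caveat on the join-free version: Failed only groups correctly if the Event field is also extracted from the "Couldn't save" events. A hedged sketch, reusing the rex from the original search and assuming the " access : ... : process" pattern is present in the error lines as well (if it is not, Event would have to come from correlating on ID instead):

index=abcd ("API : access : * : process : Payload:") OR ("Couldn't save")
| rex " access : (?<Event>.+) : process"
| eval status = if(searchmatch("Couldn't save"), "Failed", "Success")
| stats count as Total, count(eval(status="Failed")) as Failed by Event
| eval Success=Total-Failed
| table Event Total Success Failed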
.NET agent status is at 100% after deleting the .NET and Machine agents. All the servers were rebooted and checked for AppD-related services and folders; they were all removed. Could this be related to old data still reflecting on the AppD controller?
I use AppDynamics to send a daily report on slow or failed transactions, and while the email digest report is helpful, is there a way to include more detailed information about the data collectors (name and value) in the email digest report? Is this something done using custom email templates?
index=abcd "API : access : * : process : Payload:" |rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:" |rex " access : (?<Event>.+) : process" |stats count as Total by Event |join type=inner ID [|search index=a... See more...
index=abcd "API : access : * : process : Payload:" |rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:" |rex " access : (?<Event>.+) : process" |stats count as Total by Event |join type=inner ID [|search index=abcd "API" AND ("Couldn't save") |rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:" |dedup ID |stats count as Failed ] |eval Success=Total-Failed |stats values(Total),values(Success),values(Failed) by Event Event values(Total) values(Success) values(Failed) Event1 76303 76280 23 Event2 4491 4468 23 Event3 27140 27117 23 Event4 118305 118282 23 Event5 318810 318787 23 Event6 9501 9478 23 I am trying to join to different search (index is common) on ID field and then trying to group them by "Event" field but the Failed column is showing the same value for all the events.
| dedup workOrderId Status
Based on my complete solution, is there a way to remove duplicates based on two values (workOrderId and Status) before aggregating? (workOrderId is extracted from "| spath input=content path=workOrderId".)

index="wcnp_acc-omni" "*acc-omni-service-prod*"
| spath path=log.content output=content
| eval content=json_array_to_mv(content)
| mvexpand content
| spath input=content path=status
| spath input=content path=serviceCart.serviceItems{}.serviceType output=serviceType
| eval created=if(serviceType="OIL_AND_LUBE" AND status="CREATED", 1, 0)
| eval completed=if(serviceType="OIL_AND_LUBE" AND status="SERVICE_COMPLETE", 1, 0)
| where completed > 0 OR created > 0
| stats sum(created) as createdTotal, sum(completed) as completedTotal
| eval total = (completedTotal/createdTotal) * 100
| table total, createdTotal, completedTotal
| rename total as "Total Completion Rate Oil/Lube" createdTotal as "Total Created" completedTotal as "Total Completed"
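Picking up the dedup suggestion from the reply above, a minimal sketch of where it could sit in that pipeline - this assumes workOrderId is extracted with the same spath pattern as status, and that "Status" refers to that status field:

...
| mvexpand content
| spath input=content path=status
| spath input=content path=workOrderId
| spath input=content path=serviceCart.serviceItems{}.serviceType output=serviceType
| dedup workOrderId status
| eval created=if(serviceType="OIL_AND_LUBE" AND status="CREATED", 1, 0)
...

The dedup has to run after mvexpand and the spath extractions but before the stats, otherwise the duplicate work orders have already been counted.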
Is there a playbook for this kind of thing? A "user password policy enforcement" playbook.
Hello. I'm using the trial and following the instructions for sending to APM with a manually instrumented Python app as seen below:

apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: your-application
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: SPLUNK_OTEL_AGENT
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://$(SPLUNK_OTEL_AGENT):4317"
            - name: OTEL_SERVICE_NAME
              value: "blah"
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "service.version=1"

If I'm using the Splunk distribution of the otel collector, how can I get the dns name of the `OTEL_EXPORTER_OTLP_ENDPOINT` without having to use `status.HostIp`?
Hi @Roy_9, what kind of logs are these? Is there an add-on for these logs? If they are text files, you can ingest them into Splunk, but I have never seen them, so you will have to create your own parsing rules. Ciao. Giuseppe
Hi @onthakur, you have to categorize the events. Assuming that:

LOG1, LOG2 and LOG3 have different sourcetypes (or something else to recognize them),
Event is a field that you have already extracted,
CorrelationID is a common key between the three logs,
success means you have the message "record completed",
error means you have the message "Couldn't save the SubscribersSettings record in DB",

and remembering that you cannot have more than one split-by field in timechart, so you must use stats, you could create a search like the following:

index=your_index sourcetype IN (LOG1, LOG2, LOG3)
| bin span=1h _time
| stats values(Event) AS Event count AS Total_Count count(eval(searchmatch("record completed"))) AS success count(eval(searchmatch("Couldn't save the SubscribersSettings record in DB"))) AS Error BY _time CorrelationID

Adapt the search to your conditions. Ciao. Giuseppe
Unfortunately I get 0 results...
Your requirement is not completely clear - which time do you want? Can there be multiple entries in any of the logs for the same transaction id? If there are multiples, how do you want these counted? Do you actually need log2, since every transaction is in log1 (giving you the total) and errors are in log3, so successful is the difference between those two counts?