All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Please find the snapshot below. No listings are shown for the selected options.
I am new to Splunk Mission Control and have been assigned to demo the Splunk Cloud platform with the following features:

Incident Management: simplifies the detection, prioritization, and response process.
Investigative Capabilities: integrates diverse data sources for thorough investigations.
Automated Workflows: reduces repetitive tasks through automation.
Collaboration Tools: facilitates communication and information sharing within the SOC team.

Details: please provide examples of automated workflows specific to common SOC scenarios. Can somebody provide me with links to how-to videos and documentation to set up my demo? Thank you.
Group the Failed counts by Event:

[| search index=abcd "API" AND ("Couldn't save")
 | rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:"
 | dedup ID
 | stats count as Failed by Event ]

You could do the same thing without a join, for better performance:

index=abcd ("API : access : * : process : Payload:") OR ("Couldn't save")
| eval status = if(searchmatch("Couldn't save"), "Failed", "Success")
| stats count as Total, count(eval(status="Failed")) as Failed by Event
| eval Success = Total - Failed
| table Event Total Success Failed
The .NET agent status is still at 100% after deleting the .NET and Machine agents. All the servers were rebooted and checked for AppDynamics-related services and folders; they were all removed. Could this be related to old data still reflecting on the AppDynamics controller?
I use AppDynamics to send a daily report on slow or failed transactions. While the email digest report is helpful, is there a way to include more detailed information about the data collectors (name and value) in the digest? Is this something done using custom email templates?
index=abcd "API : access : * : process : Payload:"
| rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:"
| rex " access : (?<Event>.+) : process"
| stats count as Total by Event
| join type=inner ID
    [| search index=abcd "API" AND ("Couldn't save")
     | rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:"
     | dedup ID
     | stats count as Failed ]
| eval Success=Total-Failed
| stats values(Total), values(Success), values(Failed) by Event

Event    values(Total)  values(Success)  values(Failed)
Event1   76303          76280            23
Event2   4491           4468             23
Event3   27140          27117            23
Event4   118305         118282           23
Event5   318810         318787           23
Event6   9501           9478             23

I am trying to join two different searches (the index is common) on the ID field and then group them by the Event field, but the Failed column shows the same value for all the events.
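A likely cause (a sketch, not a verified fix): the subsearch ends with | stats count as Failed, which collapses it to a single row carrying neither ID nor Event, so the same overall Failed total (23) gets attached to every outer row. Keeping the grouping field inside the subsearch and joining on Event instead would produce per-event counts; this assumes the error lines contain the same " access : ... : process" text as the INFO lines, which you would need to confirm in your data:

```
index=abcd "API : access : * : process : Payload:"
| rex "\[INFO \] \[.+\] \[(?<ID>.+)\] \:"
| rex " access : (?<Event>.+) : process"
| stats count as Total by Event
| join type=left Event
    [| search index=abcd "API" AND ("Couldn't save")
     | rex "\[ERROR\] \[API\] \[(?<ID>.+)\] \:"
     | rex " access : (?<Event>.+) : process"
     | dedup ID
     | stats count as Failed by Event ]
| eval Success = Total - coalesce(Failed, 0)
| table Event Total Success Failed
```

The left join plus coalesce keeps events that had no failures at all, which an inner join would silently drop.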
| dedup workOrderId Status
Based on my complete solution, is there a way to remove duplicates based on two values (workOrderId and Status) before aggregating? From "| spath input=content path=workOrderId":

index="wcnp_acc-omni" "*acc-omni-service-prod*"
| spath path=log.content output=content
| eval content=json_array_to_mv(content)
| mvexpand content
| spath input=content path=status
| spath input=content path=serviceCart.serviceItems{}.serviceType output=serviceType
| eval created=if(serviceType="OIL_AND_LUBE" AND status="CREATED", 1, 0)
| eval completed=if(serviceType="OIL_AND_LUBE" AND status="SERVICE_COMPLETE", 1, 0)
| where completed > 0 OR created > 0
| stats sum(created) as createdTotal, sum(completed) as completedTotal
| eval total = (completedTotal/createdTotal) * 100
| table total, createdTotal, completedTotal
| rename total as "Total Completion Rate Oil/Lube" createdTotal as "Total Created" completedTotal as "Total Completed"
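Building on the "| dedup workOrderId Status" suggestion above, one way to drop duplicates before aggregating is to extract workOrderId alongside status and dedup on the pair before the eval/stats steps. This is a sketch: it assumes workOrderId sits at the top level of each expanded content object, the same place the status field is extracted from:

```
index="wcnp_acc-omni" "*acc-omni-service-prod*"
| spath path=log.content output=content
| eval content=json_array_to_mv(content)
| mvexpand content
| spath input=content path=status
| spath input=content path=workOrderId
| spath input=content path=serviceCart.serviceItems{}.serviceType output=serviceType
| dedup workOrderId status
| eval created=if(serviceType="OIL_AND_LUBE" AND status="CREATED", 1, 0)
| eval completed=if(serviceType="OIL_AND_LUBE" AND status="SERVICE_COMPLETE", 1, 0)
| where completed > 0 OR created > 0
| stats sum(created) as createdTotal, sum(completed) as completedTotal
```

Placing the dedup before the stats means each (workOrderId, status) pair is counted at most once; placing it after would be too late, since stats has already aggregated.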
is there playbook for this kind of thing? playbook "user password policy enforcement "
Hello. I'm using the trial and following the instructions for sending to APM with a manually instrumented Python app, as seen below:

apiVersion: apps/v1
kind: Deployment
spec:
  selector:
    matchLabels:
      app: your-application
  template:
    spec:
      containers:
        - name: myapp
          env:
            - name: SPLUNK_OTEL_AGENT
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://$(SPLUNK_OTEL_AGENT):4317"
            - name: OTEL_SERVICE_NAME
              value: "blah"
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "service.version=1"

If I'm using the Splunk distribution of the OpenTelemetry Collector, how can I get a DNS name for `OTEL_EXPORTER_OTLP_ENDPOINT` without having to use `status.hostIP`?
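One possibility, assuming the collector is also deployed in gateway mode behind a Kubernetes Service: point the endpoint at the Service's cluster DNS name instead of the node IP. The service name and namespace below are assumptions that depend on your Helm release, so verify them with kubectl get svc first:

```yaml
env:
  # Sketch only: replace "splunk-otel-collector" and "default" with the
  # actual Service name and namespace created by your collector deployment.
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://splunk-otel-collector.default.svc.cluster.local:4317"
```

Note the trade-off: the status.hostIP approach targets the agent DaemonSet pod on the same node as your app, while a Service DNS name routes through a ClusterIP and only works if a gateway/Service for the collector actually exists in the cluster.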
Hi @Roy_9, what kind of logs are these? Is there an add-on for these logs? If they are text files, you can ingest them into Splunk, but I have never seen them, so you will have to create your own parsing rules. Ciao. Giuseppe
Hi @onthakur, you have to categorize the events:

if LOG1, LOG2 and LOG3 have different sourcetypes (or something else to recognize them),
Event is a field that you already extracted,
CorrelationID is a common key between the three logs,
success is an action when you have the message "record completed",
error is an action when you have the message "Couldn't save the SubscribersSettings record in DB",

and remember that you cannot have more than one split-by column in timechart, so you must use stats. You could create a search like the following:

index=your_index sourcetype IN (LOG1, LOG2, LOG3)
| bin span=1h _time
| stats values(Event) AS Event count AS Total_Count count(eval(searchmatch("record completed"))) AS Success count(eval(searchmatch("Couldn't save the SubscribersSettings record in DB"))) AS Error BY _time CorrelationID

Adapt the search to your conditions. Ciao. Giuseppe
Unfortunately I get 0 results...
Your requirement is not completely clear - which time do you want? Can there be multiple entries in any of the logs for the same transaction ID? If there are multiples, how do you want these counted? Do you actually need LOG2, since every transaction is in LOG1 (giving you the total) and the errors are in LOG3, so successful is the difference between those two counts?
Hello, can the below Windows event log path be ingested into Splunk, and is it available in any add-ons? Microsoft\Windows\Privacy-Auditing\Operational EventLog. Thanks
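If a universal forwarder on the host can read that channel, a Windows event log input stanza along these lines might work. This is a sketch: the channel name (likely Microsoft-Windows-Privacy-Auditing/Operational) should be verified against the "Full Name" shown in Event Viewer, and the index is an assumption:

```
# inputs.conf on the Windows forwarder (sketch; verify the channel name in Event Viewer)
[WinEventLog://Microsoft-Windows-Privacy-Auditing/Operational]
disabled = 0
index = wineventlog
```

The Splunk Add-on for Microsoft Windows ships inputs for the common channels (Security, System, Application); less common operational channels like this one usually need a custom stanza such as the above added to the add-on's local configuration.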
Hi @vstan, ok, please try this:

index="ABC" (sourcetype="SourceA" OR sourcetype="SourceB")
| eval User=coalesce(user,User)
| stats values(TOTAL_ATTACHMENT_SIZE_SEGMENT) AS Total_Bytes_Size values(EMAIL_ADDRESS) AS EMAIL_ADDRESS BY User
| sort - Total_Bytes_Size

The error was caused by the space after sum. Ciao. Giuseppe
Team, I have 3 logs. I need to fetch Transaction_id, Event, and Total_Count from LOG1. After that I need to join the 3 logs to get the successful and failed counts: a successful transaction will have only LOG2, while a failed transaction will have both LOG2 and LOG3. Finally I need the data in a timechart (span=1h) with the columns _time, Event, Total_Count, Successful, Error.

LOG1 = 024-05-29 12:35:49.288 [INFO ] [Transaction_id] : servicename : access : Event : process : Payload:
LOG2 = 2024-05-29 12:11:09.226 [INFO ] [Transaction_id] : application_name : report : servicename (Async) : DB save for SubscribersSettingsAudit record completed in responseTime=2 ms
LOG3 = 2024-05-24 11:25:36.307 [ERROR] [Transaction_id] : application_name : regular : servicename (Async) : Couldn't save the SubscribersSettings record in DB
Hi @gcusello, my data is already 'summed'; this is how it is stored:

TOTAL_ATTACHMENT_SIZE_SEGMENT
5-25MB

When I try to run the query you provided, it gives me an error: Error in 'stats' command: The argument '(TOTAL_ATTACHMENT_SIZE_SEGMENT)' is invalid.
Hi @pc591f, check whether the add-ons I mentioned are installed and whether the inputs that take the information you need are enabled. If yes, you only have to create your searches; if not, you don't have the information for your use cases. Ciao. Giuseppe
Hi @gcusello, thanks for the information. Forwarders are currently installed on all servers; it's just the searches that need setting up, as my colleague is away for the week and I am trying to set up some basic alerts. Thanks for your advice.