
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Under that subject line, the detail says: "You do not have necessary authorization to access and use this application : App Content Manager. Access to all of its features has been restricted. If you believe this is in error, or if you require access for a specific reason, please reach out to your Splunk administrator for further assistance." But I am the Splunk admin. This app is quite new and not supported by Splunk, so I am trying to get the authors' insights, or hear from anyone who has experience with it. Much appreciated!
I have a customer that wants to disable alerting Mon-Fri 5PM - 6AM and all day Sat-Sun. It appears that I can only have one schedule per Health Rule. Is it possible to have multiple schedules per Health Rule? Thanks, S/
I'm a bit new to Splunk; apologies if I miss anything obvious. I'm looking to selectively block events meeting certain criteria from being indexed. Here's the current setup:

Splunk Universal Forwarder 9.1.4.0
Windows Server 2019

And the conf:

& 'C:\Program Files\SplunkUniversalForwarder\bin\btool.exe' inputs list
...
[WinEventLog://Security]
blacklist1 = REDACTED
blacklist2 = EventCode="4688" Message="New Process Name: (?i)C:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi)).exe"
blacklist3 = REDACTED
disabled = 0
evt_dc_name =
evt_dns_name =
evt_resolve_ad_obj = 0
host = REDACTED
index = REDACTED
interval = 60
...

Now here's what I see:

- No errors around processing this blacklist (if I use an invalid regex, it grumbles).
- So many splunk process events. So many.

Not clear on why this blacklist is not working. Any suggestions? In Splunk, if I show source for the log, I get this:

06/18/2024 01:49:56 PM
LogName=Security
EventCode=4688
EventType=0
ComputerName=REDACTED
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=3063451653
Keywords=Audit Success
TaskCategory=Process Creation
OpCode=Info
Message=A new process has been created.
Creator Subject:
   Security ID: S-1-5-18
   Account Name: REDACTED
   Account Domain: REDACTED
   Logon ID: 0x3E7
Target Subject:
   Security ID: S-1-0-0
   Account Name: -
   Account Domain: -
   Logon ID: 0x0
Process Information:
   New Process ID: 0x1e4c
   New Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe
   Token Elevation Type: %%1936
   Mandatory Label: S-1-16-16384
   Creator Process ID: 0x35e4
   Creator Process Name: C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe
   Process Command Line: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.exe" --ps2
Token Elevation Type indicates the type of token that was assigned to the new process in accordance with User Account Control policy. Type 1 is a full token with no privileges removed or groups disabled. A full token is only used if User Account Control is disabled or if the user is the built-in Administrator account or a service account. Type 2 is an elevated token with no privileges removed or groups disabled. An elevated token is used when User Account Control is enabled and the user chooses to start the program using Run as administrator. An elevated token is also used when an application is configured to always require administrative privilege or to always require maximum privilege, and the user is a member of the Administrators group. Type 3 is a limited token with administrative privileges removed and administrative groups disabled. The limited token is used when User Account Control is enabled, the application does not require administrative privilege, and the user does not choose to start the program using Run as administrator.

And finally, if I match that source against the regex string, it matches, which... should that not mean the event would be blacklisted? Is there any debug-level logging / tooling I should check that might reveal what this is actually doing or not doing? It seems like it should "just work", but, again, I am quite new with Splunk. Thanks for any help, and apologies if this is something obvious that I have missed!
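A hedged guess, not a confirmed fix: in the classic-format rendering of 4688 events the text after "New Process Name:" is usually separated by a tab, which the literal single space in the regex will not match (copy-pasting the shown source into a regex tester often converts tabs to spaces, which would explain why a manual test passes). A sketch of the same stanza with the whitespace loosened to \s+, the (?i) moved to the front, and the trailing dot escaped; restart the Universal Forwarder after changing inputs.conf:

[WinEventLog://Security]
blacklist2 = EventCode="4688" Message="(?i)New\s+Process\s+Name:\s+C:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe"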
Hi all - I am trying to create what I would think is a relatively simple conditional statement in Splunk.

Use case: I merely want to know if a job has passed or failed; the only thing that is maybe tricky about this is that the only messages we get for pass or fail look like:

msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"

I have tried to create a conditional statement based on the messaging, but I either return a NULL value or the wrong value. If I try:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=if('message.msg'="*Work Flow Passed | for endpoint XYZ*","SUCCESS", "FAIL")
| table _time, Status

then it just shows Status as FAIL (which I know is objectively wrong, because the only message produced for this event is "Work Flow Passed...", which should evaluate to TRUE and display "SUCCESS"). If I try another way:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=case(msg.message="*Work Flow Passed | for endpoint XYZ*", "SUCCESS", msg.message="*STATUS - FAILED*", "FAIL")
| table _time, Status

I receive a NULL value for the Status field... If it helps, this is how the event looks when I don't add any conditional statement or table. How can I fix this?? Thanks!
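A sketch of what may be going wrong (hedged, since the field extraction isn't visible here): the * wildcard only works in the search portion of SPL, while if() and case() inside eval do literal string comparison, so "*Work Flow Passed...*" never matches. Also, a field name containing a dot has to be wrapped in single quotes when referenced in eval. Using like() with % wildcards instead:

index=*app_pcf cf_app_name="mddr-batch-integration-flow" msg.message="*Work Flow Passed | for endpoint XYZ*" OR msg.message="*STATUS - FAILED*"
| eval Status=case(like('msg.message', "%Work Flow Passed%for endpoint XYZ%"), "SUCCESS",
                   like('msg.message', "%STATUS - FAILED%"), "FAIL",
                   true(), "UNKNOWN")
| table _time, Status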
I have a scheduled search/alert. It validates that for every Splunk event of type A, there is a type B. If it doesn't see a corresponding B, it will alert. Occasionally I am getting false alerts because Splunk is not able to reach one or more indexers. I'll see the message "The following error(s) occurred while the search ran. Therefore, search results might be incomplete." along with additional details. That means the search doesn't get back all the events, which may include a type B event, and a false alert fires. Since Splunk knows it wasn't able to communicate with all the indexers, I'd like to abort the search. Is there anything like the "addinfo" command where I can add information about whether all the data was retrieved successfully, so that I can do a where clause on it and remove all my rows if there were errors? How can I prevent an alert from firing if I didn't get all the results back from the indexers?
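There is no built-in SPL flag for "results may be incomplete" that I know of, but one hedged workaround is to gate the alert on distributed-peer health at the time the alert search runs. This is an approximation: it checks whether peers are up now, not whether this particular search actually lost one mid-flight, and it assumes the distributed peers REST endpoint reports a status field whose healthy value is "Up" (verify that on your deployment). Sketch:

<your existing alert search>
| appendcols
    [| rest /services/search/distributed/peers splunk_server=local
     | stats count(eval(status!="Up")) AS peers_down]
| eventstats max(peers_down) AS peers_down
| where coalesce(peers_down, 0) = 0

An alternative outside SPL is to have an external script inspect the finished job's messages via the search jobs REST endpoint before acting on the results.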
I need to filter a part of a log using regex, I have the following log log: {dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","trac... See more...
I need to filter out part of a log using regex. I have the following log:

log: {dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I need to remove this fragment from the output:

{dx.trace_id=xxxxx, dx.span_id=yyyyy, dx.trace_sampled=true}

so that the visible log is the following:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

There are also outputs where the part I need to filter appears with fewer fields, or with none at all:

log: {dx.trace_sampled=true}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

log: {}{"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

In these last two examples I still need to filter out, respectively:

{dx.trace_sampled=true}
{}

so that the output is finally clean and leaves only what I need:

log: {"logtopic":"x","appname":"y","module":"z","Id":"asdasd","traceId":"aaaaaaa","parentId":"sssssss","spanId":"ddddddd","traceFlags":"00","timestamp":"2024-05-29 11:42:37.675","event":"POST:geAll","level":"info","payload":{"orderId":"yyyy","channel":"zzz","skupCheck":true},"msgResponse":{"httpMethod":"POST","httpStatusCode":200,"httpMessage":"OK","url":"getAll"},"message":"Response in POST:getAll"}

I hope you can help me, please.
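If the goal is to strip that prefix permanently at index time, a SEDCMD in props.conf (applied on the indexer or heavy forwarder for this sourcetype) is one option. A sketch, assuming every event starts with "log: " followed by the brace group and then the JSON; the sourcetype name is a placeholder, and it is worth testing on a dev input first:

[my_json_sourcetype]
SEDCMD-strip_dx_prefix = s/^(log:\s*)\{[^{}]*\}\{/\1{/

The same expression can be tried at search time without touching configuration:

... your search ...
| rex field=_raw mode=sed "s/^(log:\s*)\{[^{}]*\}\{/\1{/"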
I am new to Splunk and am observing the event count and current size showing 0, even though we can search the index and it has data. Any insights will be helpful.
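If those zeroes come from the Settings > Indexes page (or the monitoring console), one hedged cross-check is to ask the indexers directly what they hold for that index; <your_index> below is a placeholder:

| dbinspect index=<your_index>
| stats sum(eventCount) AS event_count sum(sizeOnDiskMB) AS size_on_disk_mb BY splunk_server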
Hi, I am looking to set up an alert which is supposed to run every weekday at 7:30 PM. The search window for the alert query should be from 7 PM the previous day to 7 PM the current day. How can I set up this alert? Thanks
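One way to express this, sketched as savedsearches.conf settings (the same values can be entered in the alert's schedule and time-range fields in Splunk Web; the stanza name is hypothetical):

# savedsearches.conf sketch
[my_weekday_7pm_alert]
# run at 7:30 PM, Monday through Friday
cron_schedule = 30 19 * * 1-5
# search window: 7:00 PM the previous day ...
dispatch.earliest_time = -1d@d+19h
# ... through 7:00 PM today
dispatch.latest_time = @d+19h

The relative time modifiers snap to midnight (@d) and then add 19 hours, so at the 7:30 PM run the window covers 7 PM yesterday through 7 PM today.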
Thank you everyone for taking the time to read this. I am new to Splunk and interested in learning more. I have a project at home that has to do with viewing authentication traffic on a given network. The challenge I face: I need to view what authentication method is being used to access what resource on the network, for a given index and sourcetype. For example, Windows systems do not have a single attribute that says whether access to the node was SSO or MFA; all I get is an event ID 4624. Windows Event ID 4624, successful logon — Dummies guide, 3 minute read (manageengine.com) My understanding is that I have to gather a few attributes and make an educated guess about what access method was used. I was hoping to find a one-liner lol that will show me what resource is using what authentication method. Any help would be appreciated, and virtual drinks on me if we strike gold.
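There is no single field in 4624 that says "SSO" vs. "MFA", so a hedged first step is to profile what Windows does expose, chiefly Logon_Type and Authentication_Package (Kerberos, NTLM, Negotiate), and build the educated guess from there. The field names below assume the Splunk Add-on for Microsoft Windows extractions; adjust the index, sourcetype, and field names to whatever your data actually contains:

index=wineventlog EventCode=4624
| stats count BY host Logon_Type Authentication_Package user
| sort - count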
I asked in a previous thread for help getting response time based on the time differential between two events connected by a UUID (Solved: Re: Measuring time difference between 2 entries - Splunk Community), which is working perfectly. I turned that into an average response time grouped by a particular transaction type (processName), and that's working fine as well, but I would very much like to use this as a timechart, and I can't seem to get it working. From what I understand, the fact that I am using stats strips out the _time that timechart uses, but I am not sure how to work around that. My query goes as follows:

[My search here]
| stats earliest(eval(if(eventType="BEGIN",_time,""))) AS Begin_time latest(eval(if(eventType="END",_time,""))) AS End_time BY UUID processName
| eval ResponseTime=End_time-Begin_time
| stats avg(ResponseTime) by processName

I've tried a number of things that didn't work, including changing the final stats to:

| timechart span=10m Avg(ResponseTime) by processName

While this did perform a search, it generated no results whatsoever. I won't bore everyone with my multiple failures. My query gives me basically:

ProcessName    Avg(Response_time)
Process1       0.5
Process2       0.6
Process3       0.7

My goal is to get this as a timechart visualization with a span of 10 minutes. Any suggestions? Thanks
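A possible workaround, sketched on top of the query above: keep the BEGIN timestamp through the first stats (which it already does), then reassign it to _time before handing off to timechart, since timechart needs a _time column to bucket on. Untested against this data, so treat it as a starting point:

[My search here]
| stats earliest(eval(if(eventType="BEGIN",_time,""))) AS Begin_time latest(eval(if(eventType="END",_time,""))) AS End_time BY UUID processName
| eval ResponseTime=End_time-Begin_time
| eval _time=Begin_time
| timechart span=10m avg(ResponseTime) BY processName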
Hi, I have a search as below. I want to find count of recipients by action where how many users received the email vs not for every event   index=a sourcetype="a" | bucket span=4h _time | stats... See more...
Hi, I have a search as below. I want to find the count of recipients by action, i.e. for every event, how many users received the email vs. how many did not.

index=a sourcetype="a"
| bucket span=4h _time
| stats values(action) as email_action, values(Sender) as Sender, dc(sender_email) as Sender_email_count, values(subject) as subject, dc(URL) as url_count, values(URL) as urls, values(filename) as files, values(recipients_list) as recipients_list by sender_name,_time
| search (subject="*RE:*")

Any help would be appreciated. Thank you!
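A rough sketch of one way to count recipients per action (assumptions: recipients_list is a multivalue field and action carries the delivered/blocked disposition; adjust the field names to your sourcetype):

index=a sourcetype="a" subject="*RE:*"
| bucket span=4h _time
| stats dc(recipients_list) AS recipient_count values(recipients_list) AS recipients BY _time sender_name subject action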
Coming from SQL, I want to do stuff like GROUP BY and HAVING ...

The data is available with a transaction identifier. Grouping should be done by that transaction identifier. Per transaction, I want to check a few attributes to see whether their values are unique within each transaction. In SQL terms:

select transaction_id
from index
group by transaction_id
having count(distinct attr1) = 1
   and count(distinct attr2) = 1
   and count(distinct attr3) = 1

From that table of transaction_ids, a join back to the same index should be done to filter the events. How can I achieve this with a Splunk query?
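A hedged SPL translation of the SQL above (dc() is SPL's count(distinct ...); the index and attribute names are taken straight from the question):

index=your_index
| stats dc(attr1) AS dc1 dc(attr2) AS dc2 dc(attr3) AS dc3 BY transaction_id
| where dc1=1 AND dc2=1 AND dc3=1
| fields transaction_id

For the join back to the events, one common pattern is to feed that result in as a subsearch filter rather than an actual join (note the default subsearch limits of roughly 10,000 results / 60 seconds if the list of transaction_ids is large):

index=your_index
    [ search index=your_index
      | stats dc(attr1) AS dc1 dc(attr2) AS dc2 dc(attr3) AS dc3 BY transaction_id
      | where dc1=1 AND dc2=1 AND dc3=1
      | fields transaction_id ]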
I cannot access the Security Content dashboard because I receive this message:
I have been trying to get the following sourcetype into Splunk for PI.  This whole stanza should go in as 1 event, but I've been unable to get the breakdown to multiple events from happening: { "Pa... See more...
I have been trying to get the following sourcetype into Splunk for PI. This whole stanza should go in as one event, but I've been unable to stop it from being broken into multiple events:

{
  "Parameters": null,
  "ID": 2185,
  "TimeStamp": "\/Date(1718196855107)\/",
  "Message": "User query failed: Connection ID: 55, User: xxxxx, User ID: 1, Point ID: 247000, Type: summary, Start: 12-Jun-24 08:52:45, End: 12-Jun-24 08:54:15, Mode: 5, Status: [-11059] No Good Data For Calculation",
  "ProgramName": "sssssss",
  "Category": null,
  "OriginatingHost": null,
  "OriginatingOSUser": null,
  "OriginatingPIUser": null,
  "ProcessID": 5300,
  "Priority": 10,
  "ProcessHost": null,
  "ProcessOSUser": "SYSTEM",
  "ProcessPIUser": null,
  "Source1": "piarcset",
  "Source2": "Historical",
  "Source3": null,
  "SplunkTime": "1718196855.10703",
  "Severity": "Warning"
},

I have even tried using the _json sourcetype that ships with Splunk, but it keeps breaking it into multiple lines/events. Any suggestions would be helpful.
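A hedged props.conf sketch for the parsing tier (indexer or heavy forwarder), assuming the raw feed is a stream of multi-line JSON objects separated by "}," followed by the next "{". The sourcetype name is a placeholder, and the timestamp handling may need tuning against the actual data:

[pi:message_log]
SHOULD_LINEMERGE = false
# break only between objects: "}," (or "}") at the end of one record, "{" starting the next
LINE_BREAKER = \},?(\s+)\{
# pull the event time from the epoch-milliseconds inside "\/Date(1718196855107)\/"
# (assumption: %s%3N parses the 13-digit epoch millis; adjust if events land at the wrong time)
TIME_PREFIX = "TimeStamp":\s*"\\/Date\(
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
# search-time JSON extraction; the trailing comma after each object may still need stripping
# (e.g. with a SEDCMD) for clean spath/KV extraction
KV_MODE = json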
How do I find the difference, in days and hours respectively, between the event time of the data and the current time?

The format of the time, i.e. _time, is: 6/18/24 10:17:15.000 AM

I tried the query below, which gives me the current event time and the current server time correctly, but I need help finding the difference.

index=testdata sourcetype=testmydata
| eval currentEventTime=strftime(_time,"%+")
| eval currentTimeintheServer= strftime(now(),"%+")
| eval diff=round(('currentTimeintheServer'-'currentEventTime') / 60)
| eval diff = tostring(diff, "duration")
| table currentEventTime currentTimeintheServer diff index _raw

Please assist.
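The likely issue in the query above (hedged, since the results aren't shown) is that the subtraction runs on the strftime() strings rather than on epoch values. A sketch that does the arithmetic on epoch seconds first and only formats for display at the end:

index=testdata sourcetype=testmydata
| eval diff_seconds = now() - _time
| eval diff_days = floor(diff_seconds / 86400)
| eval diff_hours = floor((diff_seconds % 86400) / 3600)
| eval currentEventTime = strftime(_time, "%+")
| eval currentTimeintheServer = strftime(now(), "%+")
| table currentEventTime currentTimeintheServer diff_days diff_hours index _raw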
Hello Sir/Madam,

I am using the on-premises version of the AppDynamics platform and am evaluating the latest features of the EUM component for a web application. While checking the EUM data, everything is OK, but the 'Experience Journey Map' page shows the following error message. What is the root cause of the problem?

Http failure response for https://mydomain/controller/restui/eum/common/userJourneyUiService/getPathBasedWebUserJourneyTree: 500 OK

Moreover, the following error occurs in the controller log. You can find the full stack trace of the exception in the attached file.

[#|2024-06-18T05:37:09.855-0500|WARNING|glassfish 4.1|com.singularity.ee.controller.beans.eumcloud.EUMCloudManagerImpl|_ThreadID=31;_ThreadName=http-listener-1(3);_TimeMillis=1718707029855;_LevelValue=900;|Failed to fetch EUM web user journey for account333356-ss-Mpco-vghtqr2am7n4 com.appdynamics.eum.rest.client.exception.TransportException: Communication failure with service (http://myip:7001/userjourney/v3/web com.appdynamics.eum.client.deps.com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "pageUrlPrefix" (class com.appdynamics.eum.platform.userjourney.query.api.query.PathBasedTreeNode), not marked as ignorable (10 known properties: "outgoingMap", "aggregatedOutgoingQoSList", "outgoingCount", "outgoingNodeDetails", "parentIdString", "incomingCount", "incomingNodeDetails", "name", "nodeIdString", "levelCount"]) at [Source: (com.appdynamics.eum.client.deps.org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 92] (through reference chain: com.appdynamics.eum.platform.userjourney.query.api.result.PathBasedUserJourneyTree["nodes"]->java.util.ArrayList[0]->com.appdynamics.eum.platform.userjourney.query.api.query.PathBasedTreeNode["pageUrlPrefix"]).|#]

[#|2024-06-18T05:37:09.860-0500|SEVERE|glassfish 4.1|com.appdynamics.controller.persistence.ControllerExceptionHandlingInterceptor|_ThreadID=31;_ThreadName=http-listener-1(3);_TimeMillis=1718707029860;_LevelValue=1000;|SERVICE=CONTROLLER_BEANS MODULE=CONTROLLER LOGID=ID000401 Encountered a server exception com.singularity.ee.controller.beans.eumcloud.UserFriendlyServerException: Internal server error. See server.log for details.
at com.singularity.ee.controller.beans.eumcloud.EUMCloudManagerImpl.queryPathBasedWebUserJourneyTree(EUMCloudManagerImpl.java:3767)
at com.appdynamics.platform.persistence.TransactionInterceptor.lambda$invoke$0(TransactionInterceptor.java:37)
at com.singularity.ee.controller.beans.model.EJBManagerBean.runWithinRequiredTransaction(EJBManagerBean.java:17)
.....
Hi, I am using this app: https://splunkbase.splunk.com/app/3120. Is it possible to keep the X-axis in view when scrolling down? In the 1st example we can see the X-axis; in the 2nd, after moving down, it is gone. I know I can hover over a time, but, like in Excel, can we freeze the row so it moves with the scroll? Thanks in advance, Robert
Hi Team, We are currently using Classic XML and have made the panels collapsible/expandable using HTML/CSS, following the suggestion in the thread below: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-add-feature-expand-or-collapse-panel-in-dashboard-using/m-p/506986 However, sometimes during the first dashboard load, both the "+" and "-" signs are visible. This happens only occasionally, so I have not been able to find the cause. Do you have any suggestions or ideas to fix this? Thank you!
I've seen the documentation which says "by default subsearches return a maximum of 10,000 results and have a maximum runtime of 60 seconds", but it's unclear if that limit is before or after applying... See more...
I've seen the documentation which says "by default subsearches return a maximum of 10,000 results and have a maximum runtime of 60 seconds", but it's unclear whether that limit applies before or after transforms. E.g., does it apply to the base search (e.g. the output of index=wineventlogs AND ComputerName=MyDesktop is capped at 10k), or is it applied to the filtered results (e.g. if I add conditions and filters to reduce the final dataset), so that any rows over 10k are dropped there?
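For reference, the knobs behind that sentence live in limits.conf on the search head. My understanding (worth verifying against the docs for your version) is that maxout caps what the subsearch hands back to the outer search, i.e. it applies to the subsearch's finished results after its own pipeline has run, not to the raw base-search output:

# limits.conf
[subsearch]
# maximum number of results the subsearch returns to the outer search
maxout = 10000
# maximum runtime of the subsearch, in seconds
maxtime = 60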