All Topics

I want to visualize a data table with 4 columns: time, resource1, resource2, duration. I know how to do this with data coming from different events. However, in my case all the data is stored in a single performance-metric Splunk event. The event looks like the blob below, where the measureStart node contains the start times of the tasks and the measures node contains the durations of those tasks.

Splunk Event:

{
    measureStart: {
        "super_Task1: mini task1": 2021-11-12T02:50:05.430Z,
        "super_Task1: mini task2": 2021-11-12T02:50:06.430Z,
        "super_Task2: mini task1": 2021-11-12T02:50:07.430Z,
    },
    measures: {
        "super_Task1: mini task1": 50,
        "super_Task1: mini task2": 100,
        "super_Task2: mini task1": 80,
    }
}

I would like to produce a table that looks like this:

time                        supertasks     tasks     duration
2021-11-12T02:50:05.430Z    super_Task1    point1    50
2021-11-12T02:50:06.430Z    super_Task1    point2    100
2021-11-12T02:50:07.430Z    super_Task2    point1    80

Thank you very much!
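One possible approach (a sketch only — the index and sourcetype names are placeholders, and it assumes measureStart and measures list the tasks in the same order): extract the key/value pairs with rex, zip the multivalue fields together, and expand into one row per task:

```
index=perf sourcetype=perf_metrics
| rex max_match=0 "\"(?<task>[^\"]+)\":\s*(?<start>\d{4}-\d{2}-\d{2}T[^,\s]+)"
| rex max_match=0 "\"(?<dtask>[^\"]+)\":\s*(?<duration>\d+)\s*[,}]"
| eval rows=mvzip(mvzip(task, start, "|"), duration, "|")
| mvexpand rows
| eval time=mvindex(split(rows, "|"), 1),
       full_task=mvindex(split(rows, "|"), 0),
       duration=mvindex(split(rows, "|"), 2)
| eval supertasks=trim(mvindex(split(full_task, ":"), 0)),
       tasks=trim(mvindex(split(full_task, ":"), 1))
| table time supertasks tasks duration
```

The first rex only matches keys whose value looks like a timestamp (the measureStart block), the second only keys with a numeric value (the measures block), so the two multivalue fields line up pairwise for mvzip.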
Hello, I have events with different lengths for the _raw field within the same source. I need to limit the length of the _raw field in some cases. How would I limit the maximum length of the string displayed for the _raw field using queries from the search head? Any help will be greatly appreciated. Thank you so much.
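If the goal is only to shorten what is displayed at search time (index/sourcetype names below are placeholders), a substr on _raw is one sketch — note this changes the displayed event, not what is stored:

```
index=your_index sourcetype=your_sourcetype
| eval _raw=if(len(_raw) > 1000, substr(_raw, 1, 1000), _raw)
```

Adjust the 1000-character cap to taste.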
I have the two JSON events below where, under "appliedConditionalAccessPolicies", one event has policy1 with result=failure and policy2 with result=notApplied, and the other event has the values reversed. Now I'm trying to get the event where policy1 has result="failure", but this search returns both events:

index=test
| spath path="appliedConditionalAccessPolicies{}"
| search "appliedConditionalAccessPolicies{}.displayName"="policy1" "appliedConditionalAccessPolicies{}.result"="failure"

It looks like it searches across all elements of the array. How can I ensure it applies both conditions to each element of the array and returns only the event that has an element satisfying both conditions?
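A common pattern for this (a sketch; be aware of mvexpand memory limits on very large events) is to expand the array so each policy object becomes its own result, then apply both conditions to that single object:

```
index=test
| spath path="appliedConditionalAccessPolicies{}" output=policy
| mvexpand policy
| spath input=policy
| search displayName="policy1" result="failure"
```

After mvexpand, displayName and result come from the same array element, so both conditions are evaluated per policy rather than across the whole array.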
Events:

Event 1:
appDisplayName: App1
appId: aaaa-1111-111aeff-aad222221111
appliedConditionalAccessPolicies: [
    {
        displayName: policy1
        enforcedGrantControls: [ Block ]
        enforcedSessionControls: [ SignInFrequency, ContinuousAccessEvaluation ]
        id: f111113-111-400c-a251-2123bbe4233e1
        result: failure
    }
    {
        displayName: policy2
        enforcedGrantControls: [ Block ]
        enforcedSessionControls: [ ]
        id: sdsds-8c92-45ef-sdsds-c0b2e006d39b
        result: notApplied
    }
]

Event 2:
appDisplayName: App1
appId: aaaa-1111-111aeff-aad222221111
appliedConditionalAccessPolicies: [
    {
        displayName: policy1
        enforcedGrantControls: [ Block ]
        enforcedSessionControls: [ SignInFrequency, ContinuousAccessEvaluation ]
        id: f111113-111-400c-a251-2123bbe4233e1
        result: notApplied
    }
    {
        displayName: policy2
        enforcedGrantControls: [ Block ]
        enforcedSessionControls: [ ]
        id: sdsds-8c92-45ef-sdsds-c0b2e006d39b
        result: failure
    }
]
Hi, I have a search returning this event:

Nov 10 23:45:3 8888888 Tra[9100]: { EventName: "Error Occurred", BatchId: 095cehcx-87ee-43f6-9663-c2fb833677a978, CorrelationId: 5fghja26b9-fe73-78cb-342b-5123f2ec167896, Payload: BusinessLogicException { Message: "Lead 0000000001VII6N00AX has an agency code that is not 7 digits.", Data: [], InnerException: null, TargetSite: Void Validate(uya.QueryModels.Lead), StackTrace: " at uyu.Models.Lead.Validate(Lead queriedLead)

How do I extract only the content of the Message field, i.e. "Lead 0000000001VII6N00AX has an agency code that is not 7 digits."?
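A rex extraction along these lines (a sketch against the sample event above; the base search is a placeholder) should pull out just the quoted Message text:

```
index=your_index "Error Occurred"
| rex field=_raw "Message:\s*\"(?<Message>[^\"]+)\""
| table Message
```

The capture stops at the closing double quote, so only the message body lands in the Message field.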
Is there any method to automatically delete anomalies in Splunk UBA to maintain a total amount under the 1.5 million anomaly threshold?
Hello all! I installed Splunk ITSI following the steps in the documentation on a search head cluster. But I see that the app that was installed says IT Essentials Work and not IT Service Intelligence. Did I do something wrong?
Hi, I have a log file in Splunk which reports errors whenever something fails. I need a Splunk query that detects when the same error shows up more than 3 times in the last hour. If that happens, I need to send an alert. Can someone suggest a query with the time window included? Thanks.
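Assuming there is an extracted field holding the error text (error_message, index, and sourcetype below are all placeholders), one sketch is to schedule this hourly over the last 60 minutes with an alert condition of "number of results > 0":

```
index=your_index sourcetype=your_sourcetype log_level=ERROR earliest=-1h
| stats count by error_message
| where count > 3
```

Each result row is an error string that occurred more than 3 times in the window, so the alert fires only when at least one error crosses the threshold.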
It took some trial and error to figure out why some multivalue fields were being displayed as a single line. If the string "data:" appears in any value of a multivalue field, for example one built with stats values(x) AS x, the multivalue field displays as a single line. Is there any way to escape this behavior?

| makeresults
| eval category="fruits"
| eval name="apple,orange,strawberry,apricot,blueberry,mango"
| eval name=SPLIT(name, ",")
| mvexpand name
| eval desc=name." is delicious!"
| eval desc_data="data: ".name."is delicious!"
| table category name desc desc_data
| stats values(name) AS name values(desc) AS desc values(desc_data) AS desc_data by category

UPDATE: Thanks everyone for testing and helping identify that this issue does not affect v8.1.2. It appears to affect v8.2+.
| makeresults
| eval TYPE="CHANGES,INCIDENT,PROBLEM,TYPE"
| makemv TYPE delim=","
| mvexpand TYPE
| appendcols [subsearch]

The query above builds a static column which I want appended at the beginning of the table produced by the subsearch. Is there anything wrong with the order of the query? Please help, I'm new to Splunk.
I am checking my nessus index in Splunk and I see that data stopped being ingested a month ago. Reviewing the account config, I am using Tenable.sc Credentials, but it has (Deprecated) next to it. Are credentials no longer an option for connecting to Tenable to pull data? I am on the latest version of the add-on, 5.2.1. Thx
I am looking at a weird issue while trying to fix one of the panels in a dashboard. The panel has a query like this:

index=<index> sourcetype=log4j host=$host$ <Extracted field> != NULL
| timechart span=1m count by <Extracted field>

The issue is that we are getting inaccurate counts: the "<Extracted field> != NULL" part of the query filters out the majority of the events, yet when we try to see which events are filtered by using "<Extracted field> = NULL", we don't see any events. How does Splunk treat extracted fields which are NULL, and in what situations do these fields end up as NULL? Any suggestions for the above issue? Thanks in advance!
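One thing worth checking: in SPL search syntax, <Extracted field> != NULL compares against the literal string "NULL", and events where the field is absent entirely match neither = NULL nor != NULL — they are silently excluded by both. A sketch (your_field is a placeholder) to see how many events actually lack the field:

```
index=your_index sourcetype=log4j
| eval field_status=if(isnull(your_field), "missing", "present")
| stats count by field_status
```

If the "missing" events should be counted in the panel too, adding | fillnull value="(none)" your_field before the timechart keeps them in the results.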
Hey, has anyone created a search that matches an IP address from threat intel against an IP address from Azure, so that it triggers an alert when there's a match? I don't know if it's possible. Thanks, I will appreciate any help or advice. I am new to ES.
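If the threat-intel IPs are available as a lookup table (the lookup name and field names below are assumptions), one sketch is:

```
index=azure_signin
| lookup threat_intel_ips ip AS src_ip OUTPUT ip AS matched_ip
| where isnotnull(matched_ip)
```

If the intel lives in an index instead of a lookup, a subsearch such as index=azure_signin [ search index=threat_intel | rename ip AS src_ip | fields src_ip ] performs the same match, since a subsearch returns field=value conditions to the outer search.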
I've set up some tables in DB Connect, using a timestamp (date_modified) as a rising column (there were no other suitable fields, and it's COTS, so the vendor isn't going to add a unique value). The where clause is "where date_modified > ?". The issue is that each time the input runs, it creates a new event for the most recent row. I would expect that to happen if the where clause said "where date_modified >= ?", but with it set as "where date_modified > ?", I would think it would add no rows. Has anyone else seen this?
I'm having trouble using the where command to compare times. The search I'm running is this:

index=jamf sourcetype=JamfModularInput "computer.general.last_contact_time_epoch"=* "computer.general.last_contact_time_epoch"!=0
| dedup computer.pagination.serial_number
| rename computer.general.last_contact_time_epoch as checkinepoch
| eval thirtydays=relative_time(now(),"-30d")
| rename computer.general.last_contact_time as "Last Check-In"
| where "thirtydays">"checkinepoch"
| table thirtydays,checkinepoch,"Last Check-In"

The problem is that it returns no results when the where command uses less than (<), and if I use greater than (>) it returns all of the results without filtering the ones that I want. In the output of that search I am getting results where checkinepoch is larger than thirtydays. Does the where command treat the decimal point in the thirtydays number as a multiplication operator (like x*y = xy)? The effect of this could be that it calculates that value as 1634051921 * 000000 = 0. Super confused by this, please help!
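The likely culprit: in the where command, double quotes denote string literals, so "thirtydays">"checkinepoch" compares the text "thirtydays" against the text "checkinepoch" alphabetically (which is always true), not the field values. Field names in where go unquoted (or in single quotes), and forcing a numeric comparison doesn't hurt — a sketch of the relevant lines:

```
| eval thirtydays=relative_time(now(), "-30d")
| where thirtydays > tonumber(checkinepoch)
```

This keeps only devices whose last check-in epoch is older than 30 days, which appears to be the intended filter.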
Hello, thank you for taking the time to read and consider my question. I'm trying to integrate a .json file which contains a list of suspicious domains into a scheduled search that compares that data with a field containing destination URLs for web traffic. I've already designated an index and sourcetype for the suspicious URLs (which will be updated daily, so neither of these are static files with any predictable or constant values). What I'm looking to do now is basically ingest the dest_hostname field from the web traffic as well as the bad_domain field from the .json file and find any matching/common values among them. Here's an example of the data and what I would like to accomplish:

dest_hostname    bad_domain    matched_url
facebook.com     reddit.com    amazon.com
amazon.com       amazon.com    splunk.com
google.com       splunk.com
splunk.com       nfl.com

Once again, thank you for taking the time to read this, and any ideas or solutions would be greatly appreciated!
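One way to do this (a sketch — the index and sourcetype names are assumptions) is to feed the bad-domain list in as a subsearch so that only matching traffic survives:

```
index=web_traffic
    [ search index=threat_feed sourcetype=bad_domains
      | rename bad_domain AS dest_hostname
      | fields dest_hostname ]
| stats count by dest_hostname
```

The rename inside the subsearch matters: a subsearch returns field=value conditions, so the intel field has to be renamed to match the field name used in the outer search.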
Hey all, a bit of a Microsoft question... We want to monitor Windows Group Policy changes in our domain. We have installed the Splunk Add-on and App for Exchange and Active Directory, and also the relevant content packs containing some reports about this, and we do get events.

But... we also have Advanced Group Policy Management (AGPM, Microsoft software) installed and configured. Under the terms of that software, Microsoft AGPM is a client/server application. The AGPM Server stores Group Policy Objects (GPOs) offline in the archive that AGPM creates on the server's file system. Group Policy administrators use the AGPM snap-in for the Group Policy Management Console (GPMC) to work with GPOs on the server that hosts the archive.

A few terms:
Controlled GPO: A GPO that is being managed by AGPM. AGPM manages the history and permissions of controlled GPOs, which it stores in the archive.
Uncontrolled GPO: A GPO in the production environment for a domain and not managed by AGPM.

When you edit a GPO using the AGPM system, you work on a copy of the original GPO. As a result, the Windows event logs on the domain controllers report on a different object. Thus, the Splunk reports and event types for Group Policy changes can't figure out which GPO is being changed (since AGPM renames it and creates a "new" one).

So, after all these words... can someone help us find a proper application to monitor and view GPO changes made via AGPM in Splunk? Has someone encountered this before? Does such a product exist? And if there is no other choice, can you help us write new searches to catch GPO changes in AGPM? Thanks, Auto Team
Hi All, I need a Splunk query to identify orders which were ordered but not submitted even after 72 hours. Can anyone help me with this? Thanks.
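With some assumptions about the data (an order_id field and a status field with values like "ordered" and "submitted" — all placeholders), one sketch is:

```
index=orders status IN ("ordered", "submitted")
| stats earliest(eval(if(status="ordered", _time, null()))) AS ordered_time,
        count(eval(status="submitted")) AS submitted_count by order_id
| where submitted_count = 0 AND now() - ordered_time > 72*3600
```

Each surviving row is an order with no submitted event whose ordered event is more than 72 hours old.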
I have a search that displays unique users per day (based on a "user id" field). I also would like another search that displays "new" unique users per day, looking back to some fixed date. I suspect ... See more...
I have a search that displays unique users per day (based on a "user id" field). I also would like another search that displays "new" unique users per day, looking back to some fixed date. I suspect maybe I need a sub-search using "earliest" and "latest", but I don't know how to put it together.
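No subsearch is needed if the search window starts at the fixed date: compute each user's first-seen time over the whole range, then bucket those first appearances by day (index and field names below are placeholders):

```
index=app earliest="01/01/2021:00:00:00"
| stats min(_time) AS first_seen by user_id
| eval day=strftime(first_seen, "%Y-%m-%d")
| stats count AS new_users by day
| sort day
```

A user counts as "new" only on the day of their earliest event since the fixed start date, so each user contributes to exactly one day's total.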
Team, can you please provide documentation links to learn the Splunk UBA platform, and related links for monitoring, development, architecture, installation, etc.? Thank you.
Greetings all. I have an app, let's call it "servers", that is deployed on multiple hosts. I can see in Deployment that the app deployed OK to its specified server class, let's call it "nix_servers". There is only one app deployed into this server class (the "servers" app deployed to the "nix_servers" server class). I'm trying to get a list of all hosts (clients) that have the "servers" app deployed on them, regardless of whether those hosts (clients) are in the server class "nix_servers" or not. Is there any way to export this list to a .csv (or any other format) file? Thanks!
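If you can run a search on the deployment server itself, its REST endpoint for deployment clients can be filtered by app (a sketch; the exact field layout can vary by version, and the app name "servers" is taken from the question):

```
| rest /services/deployment/server/clients splunk_server=local
| search applications.servers.stateOnClient=*
| table hostname, ip, applications.servers.stateOnClient
| outputcsv servers_app_clients.csv
```

The applications.<appname>.* fields exist only on clients that received that app, so the search clause keeps exactly the hosts with "servers" deployed; outputcsv writes the list to a CSV on the search head.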