
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Can you please provide a solution for this issue?
Hi Team, I have a table with counts for these attributes: Re-ProcessRequest count, objectType, objectIdsCount, uniqObjectIdsCount, sqsSentCount, dataNotFoundIds.

1. How can I arrange the table columns the way I need? Currently dataNotFoundIds shows in the second column, but I want to display it in the last column, and similarly for other columns.
2. How can I filter based on the objectType, do the addcoltotals, and display the total count?

index="" source IN ""   "support request details"
| stats count
| rename count as Re-ProcessRequest
| join left
    [ search index="" source IN ""  "input params" OR "sqs sent count" OR "Total messages published to SQS successfully" OR "unique objectIds" OR "data not found for Ids"
    | rex "\"objectType\":\"(?<objectType>[^\"]+)"
    | rex "\"objectIdsCount\":\"(?<objectIdsCount>[^\"]+)"
    | rex "\"uniqObjectIdsCount\":\"(?<uniqObjectIdsCount>[^\"]+)"
    | rex "\"sqsSentCount\":\"(?<sqsSentCount>[^\"]+)"
    | rex "\"dataNotFoundIds\":\"(?<dataNotFoundIds>[^\"]+)"
    | rex "\"totalMessagesPublishedToSQS\":\"(?<totalMessagesPublishedToSQS>[^\"]+)"
    | table objectType,objectIdsCount,sqsSentCount,totalMessagesPublishedToSQS,uniqObjectIdsCount,dataNotFoundIds
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | stats list(*) as * ]
| join
    [ search index="" source IN "" "dataNotFoundIds"
    | spath output=payload path=dataNotFoundIds{}
    | spath input=_raw
    | stats count by payload
    | addcoltotals labelfield=total label="Total"
    | tail 1
    | fields - payload,total
    | rename count as datanotfound]
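For reference, a minimal sketch of the two pieces being asked about, using the field names from the query above (the objectType value is a placeholder): column order in the results is simply the order of fields given to the table command, and filtering on objectType would happen before adding the column totals.

| table objectType, objectIdsCount, uniqObjectIdsCount, sqsSentCount, totalMessagesPublishedToSQS, dataNotFoundIds
| search objectType="yourObjectType"   ``` placeholder value; filter before totalling ```
| addcoltotals labelfield=objectType label="Total"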
I want the subquery search result, which is a list of tracking IDs, in the IN clause of my main query, but none of my attempts are working. The subquery and main query work individually, but after combining them it doesn't work. I tried three different options, but none of the below are working:

1.
index="dockerlogs-silver" source="*gps-external-processor-prod*" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) as trackingID | eval trackingid="\"".mvjoin(trackingid,"\",\"")."\""])

2.
index="dockerlogs-silver" source="*gps-external-processor-prod*" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) as trackingid | table trackingid | stats values(eval("\"".trackingid."\"")) as search delim="," | nomv search])

3.
index="dockerlogs-silver" "Handle 500 Server error" OR "Handle 4xx error"
| where traceID IN ([index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786" | stats values(traceID) | format])
| table traceID
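One pattern that often works for this (a sketch only, untested against this data) is to skip the where ... IN clause and let the subsearch return traceID=value pairs directly into the base search; the index, source, and traceID field names are taken from the post above:

index="dockerlogs-silver" source="*gps-external-processor-prod*" ("Handle 500 Server error" OR "Handle 4xx error")
    [ search index="dockerlogs-silver" source="*gps-external-processor-prod*" "00012342231515417786"
    | dedup traceID
    | fields traceID ]
| table traceID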
Hello, when I clicked "Open in Search", I got the following message: "Request-URI Too Long. The requested URL's length exceeds the capacity limit for this server." I don't get the message if I copy and paste the search manually. Why does Splunk send searches via GET request? How do I fix this without an admin role? Thank you for your help.
Hi @richgalloway , Can we use a general regex to extract all the fields that are key-value pairs? I have installed a custom app on another SH, where we typically install all apps to distribute the load, but it is not reflecting (the configurations are not working) on the other SH in ES. What could be the reason? In the custom app, the object owner is not showing any user after installation. How can we change it under All Configurations in the UI of the SH? We don't have backend access. Thanks
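For the first question, a couple of generic approaches that may help, depending on the data format (both are sketches; the delimiters shown are assumptions, not taken from the actual events):

| spath   ``` if the events are valid JSON, this extracts every key as a field ```

| extract pairdelim="," kvdelim="="   ``` for key=value pairs; adjust pairdelim/kvdelim to the actual data ```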
First, thank you for giving a clear illustration of input, desired output, and the logic linking the two.  Let me confirm: are you skipping Joe because the IP address is not 1.1.1.1? Assuming this is correct, you are looking for something like

<some index search> transaction IN (Logged, DeleteTable)
| stats list(transaction) as transaction min(_time) as logon_time max(_time) as delete_time values(userIP) as userIP by User
| where mvindex(transaction, 0) == "Logged" AND mvindex(transaction, -1) LIKE "DeleteTable" AND delete_time < relative_time(logon_time, "+10min") AND userIP == "1.1.1.1"
| fieldformat logon_time = strftime(logon_time, "%F %T")
| fieldformat delete_time = strftime(delete_time, "%F %T")

Output from your sample data is

User  transaction          logon_time           delete_time          userIP
Alex  Logged DeleteTable   2023-11-05 12:05:00  2023-11-05 12:10:00  1.1.1.1
Mike  Logged DeleteTable   2023-11-05 12:02:00  2023-11-05 12:06:00  1.1.1.1

This is an emulation you can play with and compare with real data

| makeresults
| eval _raw="# Time User Transaction
1 12:01 David Login from 1.1.1.1
2 12:01 Joe Login from 2.2.2.2
3 12:02 Mike Login from 1.1.1.1
4 12:03 David Something else
5 12:05 Alex Login from 1.1.1.1
6 12:06 Mike Something else
7 12:09 Joe Delete table
8 12:10 Alex Delete table
9 12:06 Mike Delete table
10 12:20 David Delete table"
| multikv forceheader=1
| eval transaction = case(Transaction LIKE "Login from %", "Logged", Transaction == "Delete table", "DeleteTable", true(), "SomethingElse")
| rex field=Transaction "Login from (?<userIP>.+)"
| fields - _* linecount Transaction
| eval _time = strptime(Time, "%H:%M")
| search transaction IN (Logged, DeleteTable)
``` the above emulates <some index search> transaction IN (Logged, DeleteTable) ```
As already stated, there is no deduplication at ingest or index time. Forwarders that load balance across multiple indexers send to one indexer until some criterion is met (time, volume, or the indexer becomes unavailable) and then send to the next.  A forwarder will send the same event twice only if indexer acknowledgment is in effect and an ACK is not received.  When that happens, one or more events may be duplicated and that must be handled at search time.
I want to deal with big data using Splunk. To reduce search time, I want to select specific data from the original data, pre-process it, and save the output in CSV format. I also want to build a dashboard using the output data. Please point me to an example query or a helpful article.
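A minimal sketch of that workflow, with a hypothetical index, sourcetype, and field names: a scheduled search pre-processes and summarizes the raw data, writes it to a CSV lookup, and dashboards then read the lookup instead of searching the raw events.

index=my_index sourcetype=my_sourcetype earliest=-24h@h latest=@h
| fields _time host status bytes                        ``` keep only the fields you need ```
| stats count avg(bytes) as avg_bytes by host status    ``` pre-process / summarize ```
| outputlookup my_summary.csv                           ``` save the result as a CSV lookup ```

A dashboard panel can then start with | inputlookup my_summary.csv and chart from there.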
In this dataset, transactions (#3 + #9 + #10 - Mike) and (#5 + #7 + #11 - Alex) would be displayed.

#   Time   User   Transaction
1   12:01  David  Login from 1.1.1.1
2   12:01  Joe    Login from 2.2.2.2
3   12:02  Mike   Login from 1.1.1.1
4   12:03  David  Something else
5   12:05  Alex   Login from 1.1.1.1
6   12:06  Mike   Something else
7   12:09  Alex   Delete table
8   12:10  Joe    Delete table
9   12:06  Mike   Delete table
10  12:09  Mike   Insert Table
11  12:14  Alex   Insert Table
12  12:20  David  Delete table

Looking for one search to find all events where, within 10 minutes:
1. A user logged in from IP address 1.1.1.1 (search: userIP="1.1.1.1" transaction="Logged")
2. The same user then deleted a table (search: databaseAction="DeleteTable")
3. The same user then inserted a table (search: databaseAction="InsertTable")

I can use startswith and endswith with transaction, but this only gives me the first and last event, not the second.
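A hedged sketch that extends the stats approach from the earlier answer so the middle (DeleteTable) and final (InsertTable) events are both required; field names and values are taken from this post, and like the earlier answer it assumes the listed transactions are in time order:

<some index search> transaction IN (Logged, DeleteTable, InsertTable)
| stats list(transaction) as transaction min(_time) as logon_time max(_time) as last_time values(userIP) as userIP by User
| where mvindex(transaction, 0) == "Logged"
    AND isnotnull(mvfind(transaction, "DeleteTable"))
    AND isnotnull(mvfind(transaction, "InsertTable"))
    AND mvfind(transaction, "DeleteTable") < mvfind(transaction, "InsertTable")
    AND last_time < relative_time(logon_time, "+10min")
    AND userIP == "1.1.1.1"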
Interesting. I know there is a search-time dedup command; is there no index-time equivalent?  Also, when a UF is configured to output to 2 HFs which are configured for autoLB, does it actually send the event to both HFs, or does it send to the first available or whichever responds?  If it sends to both, how does the de-duplication happen?
Splunk does not deduplicate inputs.  You may be able to write a script or program to serve as an intermediary that receives data from two sources and removes duplicates.  Such a work-around could be low-latency by forwarding the first instance of an event immediately.  It would tend to use a lot of memory, however, while it retains events for comparison with events from the other source.  Or it could consider only one source to be 'active' and ignore events from the other source until the first becomes non-responsive.  Either way, it's not a Splunk feature/capability.
I have a custom solution to forward CloudWatch Logs events to Splunk Cloud.  It works great!  However, I am trying to use a pair of HFs configured as Fargate containers, 4 instances of each.  I am treating them as 4 on the A side and 4 on the B side of an HA configuration.  I'm trying to approximate the functionality of the UF > HF autoLB in outputs.conf, only in this case the UF is a Lambda function.  I tried sending events to one HF instance on both the A and B side, but I end up with duplicates for every event, which makes complete sense as there is no auto-dedup. What I want to do for now is bring up a single HF that receives ALL traffic from all A-side and B-side HF instances.  I want to configure it to dedup all events and send the result to Splunk Cloud. Is this doable?  Would it create much latency? How would I configure that (inputs.conf, transforms, props)?  (I have outputs covered) Thank you, Mike
Thanks for your reply.  Yes it scales how I want it to once I actually had a single value in the output.   
Thanks for your reply.  Yes, I had extra values in the output.  Once output was one single value it worked perfectly.  Thanks
Hi, not answering a question since you didn't ask one, but it seems like you're asking if you can set up a user when you cannot reach the GUI on either the CM or DS in a distributed cluster? If so, you might start here:  https://community.splunk.com/t5/Security/How-to-create-a-user-from-the-command-line-and-require-password/m-p/410937 DOCS: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/ConfigureuserswiththeCLI
1) eventstats adds the aggregated value (sum in this instance) to each event; stats replaces the events with the aggregated statistics. 2) No, this is not normally possible - addcoltotals adds an extra event (row) to the pipeline at the end. The way the pipeline is displayed happens after the total row has been added, and there is no way to predict ahead of time how big the first page of the display is going to be.
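A self-contained toy example (made-up scores) that shows the difference for point 1: eventstats keeps all three rows and adds the same total to each, whereas stats would collapse them into a single row.

| makeresults count=3
| streamstats count as n
| eval vuln="vuln".n, scoreSum=n*20
| fields vuln scoreSum
| eventstats sum(scoreSum) as total   ``` each row keeps its own scoreSum and gains total=120 ```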
Hello, I tried your suggestion and it worked successfully. I accepted this as the solution. I appreciate your help. Thank you.

1) If I print out the "total" field, it gives a total of 140 in each row. How did eventstats know how to calculate the total of 140, when the "stats" command has scoreSum values of 20/40/80? If I played around and used "stats" or "eventstats ... by vuln", it didn't work. Please suggest.

| eventstats sum(scoreSum) as total

vuln            scoreSum  total  scoreSum_pct
vulnA           20        140    14.3%
vulnB           40        140    28.6%
vulnC           80        140    57.1%
Total_scoreSum  140       420    100%

2) If there are hundreds of rows, there will be multiple pages. The "Total_scoreSum" row will appear at the end. Is there a way to display it on the first page, but at the bottom, not at the top (using sort)?

| addcoltotals labelfield=vuln label=Total_scoreSum scoreSum scoreSum_pct

Thank you so much
Why did you not include all of that in the OP?  You could have had a solution many hours sooner.

index=myindex earliest=-24h
``` Count each hash value ```
| eventstats count by SHA256HashData
``` Find the hash value with the lowest count ```
| eventstats min(count) as minCount
``` Keep the hashes with the lowest count ```
| where count=minCount
| collect index=summary
Thank you. That worked.
Use the rex command to extract the message. | rex "\*+(?<ERROR>[^\*]+)"  This regex takes everything between asterisks and puts it into the ERROR field.
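For example, against a made-up event (the sample message text is hypothetical):

| makeresults
| eval _raw="***** Connection refused by host *****"
| rex "\*+(?<ERROR>[^\*]+)"
| table ERROR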