I tested <query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab "$technique_id$"</query> and I got:

2024-02-09 14:24:52,100 - DEBUG - Arguments received: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py', '__GETINFO__', '"T1059.003"']
2024-02-09 14:24:52,100 - ERROR - Incorrect usage: python script.py <technique_id>

This time the technique is retrieved correctly, but I guess the syntax is not what the script expects.
Your event data has only one = after request id, not two as you have used in your rex. Try this:

sourcetype="my_source" [search sourcetype="my_source" "failed request, request id=" | rex "failed request, request id=(?<request_id>[\w-]+)" | top limit=100 request_id | fields request_id]
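As a quick way to sanity-check the rex pattern outside Splunk, here is a minimal Python sketch running the same regex against an abbreviated copy of the sample event from this thread (note Python spells named groups (?P<name>...) where rex uses (?<name>...)):

```python
import re

# Abbreviated copy of the sample event shared in this thread
event = ("2024:02:09:13:47:07.078 ERROR boundedElastic-6362 "
         "failed request, request id=0a1-0b2-0a3, error: ValidationException{...}")

# Same pattern as the suggested rex: a single '=' after "request id",
# capturing word characters and hyphens (the comma stops the match)
pattern = r"failed request, request id=(?P<request_id>[\w-]+)"

match = re.search(pattern, event)
print(match.group("request_id"))  # → 0a1-0b2-0a3
```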
There is nothing in this search that ensures you only have one event - you could have two events with exactly the same _time value. Try something like this:

| sort 0 -_time
| head 1
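To illustrate why sort/head guarantees exactly one row even with tied timestamps, a small Python sketch over hypothetical event dicts (filtering on the maximum _time instead would keep both tied events):

```python
# Three hypothetical events; the first two share the same _time value
events = [
    {"_time": 1707480000, "id": "a"},
    {"_time": 1707480000, "id": "b"},
    {"_time": 1707470000, "id": "c"},
]

# Equivalent of "| sort 0 -_time | head 1": sort descending, keep one row
latest = sorted(events, key=lambda e: e["_time"], reverse=True)[:1]
print(len(latest))  # → 1
```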
What did you get in mitrepurplelab.log when you tried <query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab "$technique_id$"</query> and <query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab technique_id</query>?
This is how a sample event looks:

2024:02:09:13:47:07.078 ERROR boundedElastic-6362 c.v.v.h.UserErrorHandler -24jan-rre2655-5b684rfb9b-jcfd4 failed request, request id=0a1-0b2-0a3, error: ValidationException{message=\'Status: 915 User not part of registery. at least one seller did not have the same user id as the initial OT1 inquiry user, OT1 user=[user id=985238, seller id=134550], all merchants=[(user id=10, seller id=20), (user id=10, seller id=20), (user id=10, seller id=20), (user id=10, seller id=20)]\'}

My task is to extract that request id and then pull the events for it.
By "Search head" you mean something like an all-in-one Splunk install, right? Indexer, SH, etc... all in a single box? I think a quick outline of how this should work might help The HF with DB Con... See more...
By "Search head" you mean something like an all-in-one Splunk install, right? Indexer, SH, etc... all in a single box? I think a quick outline of how this should work might help The HF with DB Connect will have an outputs.conf set up that points to your Indexer/SH The Indexer/SH will have receiving turned on So when your HF "collects" data using DB Connect, it puts that data into an index that's on the indexer. So if I had to make a primary guess, it's that you didn't hook the HF up to the SH/IDX to send it's data there, so what it's doing is saving the data into an index _on the local HF_ instead of on your primary SH/Indexer. Some docs to read - I promise they're pretty good too!  But double check you've done all the steps in both - https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Enableareceiver https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Deployaheavyforwarder If that doesn't work, let us know what you did and what it's now doing.  Also you could check DB Connect's various "monitoring" pages to see what they say about what's happening.  Maybe you just have some other error or forgot to schedule the DB input or something.
@danielcj @ITWhisperer In this instance, I am using SPL. The most recent event I am obtaining is from a table, where I encounter repeated values in the 'destination' field (there are 2 of them).

index=foo sourcetype="foo" source="foo"
| spath input=_raw output=analytics path="analytics{}"
| rename "analytics{}.destination" as destination, "analytics{}.messages" as messages, "analytics{}.inflightMessages" as inflightMessages
| sort 0 -_time
| eventstats max(_time) as latestTime
| where _time = latestTime
| table destination, messages, inflightMessages
In a SmartStore configuration, there are a significant number of deletes/writes as buckets are evicted and copied to the indexer's volume.  To improve performance, SSD disks are being used.  In this case, how often should one run the TRIM command to help with SSD garbage collection?
OK, so when I add this to commands.conf:

command.arg.1 = T1059.003

the script works well, but then the argument is fixed, and we don't want that.

Yeah, I set mitrepurplelab.log to have more information, and it is interesting, because when I do:

| mitrepurplelab T1059.003

I get:

2024-02-09 13:29:43,221 - DEBUG - Arguments received: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py']
2024-02-09 13:29:43,221 - ERROR - Incorrect usage: python script.py <technique_id>

as if the T1059.003 was not passed. And when I launch the script from the dashboard I get the same output.

But when I remove chunked = true and add:

enableheader = true
outputheader = true
requires_srinfo = true
supports_getinfo = true
supports_multivalues = true
supports_rawargs = true
python.version = python3

to commands.conf, I get this output:

2024-02-09 13:43:38,870 - DEBUG - Arguments received: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py', '__GETINFO__', 'technique_id']
2024-02-09 13:43:38,870 - ERROR - Incorrect usage: python script.py <technique_id>

We're getting close...
I have the following SPL search:

index="cloudflare"
| top ClientRequestPath by ClientRequestHost
| eval percent = round(percent,2)
| rename count as "Events", ClientRequestPath as "Path", percent as "%"

which gives me this result. I also need to group it by 10-minute time ranges and, for every line, calculate the difference in percent between the two previous time ranges. Help me figure out how to do that, thanks.
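For the delta-between-buckets part, the arithmetic is simply each bucket's percent minus the previous bucket's (what streamstats can compute over bucketed results in SPL); a Python sketch with hypothetical per-10-minute values:

```python
# Hypothetical percent share of one (Host, Path) pair per 10-minute bucket
percents = [("13:00", 41.67), ("13:10", 38.10), ("13:20", 45.00)]

# Difference from the previous time range; None for the first bucket
deltas = []
prev = None
for bucket, pct in percents:
    deltas.append((bucket, None if prev is None else round(pct - prev, 2)))
    prev = pct

print(deltas)  # → [('13:00', None), ('13:10', -3.57), ('13:20', 6.9)]
```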
I have an HF configured to send data via sourcetype A. After some time it stops sending data to A. I then move the data to a different HF under sourcetype "test" (to check whether it is working), and from the new HF I route the data back to sourcetype A itself.

Will it re-ingest the data, or resume from the checkpoint where it left off? Will it ignore the data that was already sent under sourcetype "test"? I need help and a clear explanation.
Some of the events, as returned by your subsearch: sourcetype="my_source" "failed request, request id="
I am using the below query to get the index sizes, consumed space, and frozenTimePeriodInSecs details.

| rest /services/data/indexes splunk_server="ABC"
| stats min(minTime) as MINUTC max(maxTime) as MAXUTC max(totalEventCount) as MaxEvents max(currentDBSizeMB) as CurrentMB max(maxTotalDataSizeMB) as MaxMB max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs by title
| eval MBDiff=MaxMB-CurrentMB
| eval MINTIME=strptime(MINUTC,"%FT%T%z")
| eval MAXTIME=strptime(MAXUTC,"%FT%T%z")
| eval MINUTC=strftime(MINTIME,"%F %T")
| eval MAXUTC=strftime(MAXTIME,"%F %T")
| eval DAYS_AGO=round((MAXTIME-MINTIME)/86400,2)
| eval YRS_AGO=round(DAYS_AGO/365.2425,2)
| eval frozenTimePeriodInDAYS=round(frozenTimePeriodInSecs/86400,2)
| eval DAYS_LEFT=frozenTimePeriodInDAYS-DAYS_AGO
| rename frozenTimePeriodInDAYS as frznTimeDAYS
| table title MINUTC MAXUTC frznTimeDAYS DAYS_LEFT DAYS_AGO YRS_AGO MaxEvents CurrentMB MaxMB MBDiff

title | MINUTC | MAXUTC | frznTimeDAYS | DAYS_LEFT | DAYS_AGO | YRS_AGO | MaxEvents | CurrentMB | MaxMB | MBDiff
XYZ | 24-06-2018 01:24 | 10-02-2024 21:11 | 62 | -1995.87 | 2057.87 | 5.63 | 13115066 | 6463 | 8192 | 1729

For index 'XYZ' the frozenTimePeriod shows 62 days, so per that setting it should keep only the last ~2 months of data, yet MINUTC still shows a very old date, '24-06-2018 01:24'. When I check the event counts in Splunk for events older than 62 days, they are very few compared to the counts for the past 62 days (current event counts are very high). So why are these older events still showing in Splunk, and why only a very few of them (not all)? I want to understand this scenario before increasing the frozenTimePeriod.
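A quick check of the eval arithmetic for the XYZ row (values taken from the table above). Note also that Splunk freezes whole buckets based on the newest event in each bucket, so a bucket holding a handful of very old events can survive retention as long as it also contains recent events; that would account for a few old events lingering rather than all of them:

```python
# Values for index XYZ from the table above
frozen_secs = 62 * 86400      # frozenTimePeriodInSecs (62 days expressed in seconds)
days_ago    = 2057.87         # DAYS_AGO: span from oldest to newest event

# Same computations as the eval statements in the search
frozen_days = round(frozen_secs / 86400, 2)
days_left   = round(frozen_days - days_ago, 2)
print(frozen_days, days_left)  # → 62.0 -1995.87
```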
1) already answered - standard pie charts don't have this feature in Splunk 2) responded in thread 3) edit your dashboard panel and change the x-axis title to none  
It looks like you have found 2 events in your search not 1, but your screenshot doesn't show how many events were returned. You could also look in your search log to see what is happening.
Sorry, I did not get it. What should I share?
Consider installing a second copy of the connector.  You will need to put it in a different directory and will have to edit app.conf, but it should allow for a second connection to DUO.  Installing a second copy of an app should be done from the CLI.
What does your mitrepurplelab.log show as being passed in argv?

logging.basicConfig(filename='mitrepurplelab.log', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

def main():
    logging.debug(f"Arguments received: {sys.argv}")
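With supports_getinfo = true, Splunk's legacy (non-chunked) custom-command protocol first invokes the script with __GETINFO__ prepended to the arguments, and later __EXECUTE__ for the real run, which matches the argv seen in the logs in this thread. A hypothetical sketch of argument handling that tolerates both markers (not the actual mitrepurplelab.py):

```python
def parse_args(argv):
    """Split a legacy custom-command argv into (mode, technique_id).

    Splunk may invoke the script as:
      script.py __GETINFO__ <args...>   (capability probe)
      script.py __EXECUTE__ <args...>   (actual execution)
    """
    args = list(argv[1:])
    mode = "__EXECUTE__"
    if args and args[0] in ("__GETINFO__", "__EXECUTE__"):
        mode = args.pop(0)
    # Dashboard tokens can arrive wrapped in literal quotes: '"T1059.003"'
    technique_id = args[0].strip('"') if args else None
    return mode, technique_id

mode, tid = parse_args(
    ["/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py",
     "__GETINFO__", '"T1059.003"'])
print(mode, tid)  # → __GETINFO__ T1059.003
```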
Hi @vijreddy30, what do you mean with Splunk DB Logs, are you meaning queries on a DB done with DB-Connect or what else? If you have the data from a DB in a Splunk index, you can search them as usual. Ciao. Giuseppe
Hi team,
1. How to monitor the Splunk DB logs, which are already installed and configured?
2. How to ingest 2 Splunk DB queries into a single index?