All Posts


By "search head" you mean something like an all-in-one Splunk install, right? Indexer, SH, etc., all in a single box? A quick outline of how this should work might help:

- The HF with DB Connect will have an outputs.conf set up that points to your indexer/SH.
- The indexer/SH will have receiving turned on.
- When your HF "collects" data using DB Connect, it sends that data to an index on the indexer.

So my primary guess is that you didn't hook the HF up to the SH/IDX to send its data there, so what it's doing is saving the data into an index _on the local HF_ instead of on your primary SH/indexer.

Some docs to read - I promise they're pretty good too! But double-check you've done all the steps in both:

https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Enableareceiver
https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Deployaheavyforwarder

If that doesn't work, let us know what you did and what it's now doing. You could also check DB Connect's various "monitoring" pages to see what they say about what's happening. Maybe you just have some other error, or forgot to schedule the DB input.
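As a minimal sketch of the setup described above (the hostname, port, and group name are illustrative assumptions, not values from this thread):

```ini
# outputs.conf on the heavy forwarder - points collected data at the indexer/SH
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx.example.com:9997

# inputs.conf on the indexer/SH - turns receiving on
[splunktcp://9997]
disabled = 0
```

With this in place, DB Connect inputs on the HF are forwarded rather than written to a local index.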
@danielcj @ITWhisperer In this instance I am using SPL. The most recent event I am obtaining is from a table where I encounter repeated values in the 'destination' field (it appears twice):

index=foo sourcetype="foo" source="foo"
| spath input=_raw output=analytics path="analytics{}"
| rename "analytics{}.destination" as destination, "analytics{}.messages" as messages, "analytics{}.inflightMessages" as inflightMessages
| sort 0 -_time
| eventstats max(_time) as latestTime
| where _time = latestTime
| table destination, messages, inflightMessages
In a SmartStore configuration there are a significant number of deletes and writes as buckets are evicted from, and copied to, the indexer's cache volume. To improve performance, SSD disks are being used. In this case, how often should one run the TRIM command to help with SSD garbage collection?
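A sketch, assuming a Linux indexer running systemd (the daily schedule below is an illustrative assumption for a high-churn cache volume, not a Splunk recommendation): rather than running TRIM by hand, a common approach is to enable the stock periodic fstrim timer, which runs weekly by default, and override its schedule if eviction churn is very high.

```ini
# Enable the stock weekly TRIM timer first:
#   systemctl enable --now fstrim.timer
#
# Optional drop-in to run TRIM daily instead of weekly
# (file: /etc/systemd/system/fstrim.timer.d/override.conf)
[Timer]
OnCalendar=
OnCalendar=daily
```

After adding the drop-in, reload with `systemctl daemon-reload` and verify the next run with `systemctl list-timers fstrim.timer`.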
OK, so when I add this to commands.conf:

command.arg.1 = T1059.003

the script works, but the argument is fixed, which we don't want.

Yes, I set mitrepurplelab.log to record more information, and it is interesting: when I do

| mitrepurplelab T1059.003

I get:

2024-02-09 13:29:43,221 - DEBUG - Arguments received: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py']
2024-02-09 13:29:43,221 - ERROR - Incorrect usage: python script.py <technique_id>

as if T1059.003 was not passed. And when I launch the script from the dashboard, I get the same output.

But when I remove chunked = true and add

enableheader = true
outputheader = true
requires_srinfo = true
supports_getinfo = true
supports_multivalues = true
supports_rawargs = true
python.version = python3

to commands.conf, I get this output:

2024-02-09 13:43:38,870 - DEBUG - Arguments received: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py', '__GETINFO__', 'technique_id']
2024-02-09 13:43:38,870 - ERROR - Incorrect usage: python script.py <technique_id>

We're getting close...
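For context (a sketch of the relevant configuration, not a verified fix for this app): with chunked = true, Splunk speaks custom search command protocol v2, where arguments are delivered over stdin as protocol chunks rather than on sys.argv - so a script that parses sys.argv directly will only ever see its own path, which matches the first log output above. A v2-style stanza looks roughly like this (the stanza and filename are taken from this thread; a v2 script is expected to parse the chunked protocol, e.g. via splunklib's SearchCommand, instead of reading sys.argv):

```ini
# commands.conf - search command protocol v2
[mitrepurplelab]
filename = mitrepurplelab.py
chunked = true
python.version = python3
```

The legacy settings in the second attempt (supports_getinfo, supports_rawargs, etc.) belong to protocol v1, which is why removing chunked = true changes what arrives in sys.argv.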
I have the following SPL search:

index="cloudflare"
| top ClientRequestPath by ClientRequestHost
| eval percent = round(percent,2)
| rename count as "Events", ClientRequestPath as "Path", percent as "%"

which gives me this result. I also need to group it by 10-minute time ranges and, for every line, calculate the difference in percent between the two previous time ranges. Help me figure out how to do that, thanks.
I have a HF configured to send data via sourcetype A. After some time it stops sending data to A. I then move the data to a different HF under sourcetype "test" (to check whether it is working), and from the new HF I route the data back to sourcetype A itself. Will it re-ingest the data, or resume from the checkpoint where it left off? Will it ignore the data that was sent to sourcetype "test"? I need help and a clear explanation.
Some of the events as returned by your subsearch sourcetype="my_source" "failed request, request id="
I am using the query below to get index sizes, consumed space, and frozenTimePeriodInSecs details:

| rest /services/data/indexes splunk_server="ABC"
| stats min(minTime) as MINUTC max(maxTime) as MAXUTC max(totalEventCount) as MaxEvents max(currentDBSizeMB) as CurrentMB max(maxTotalDataSizeMB) as MaxMB max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs by title
| eval MBDiff=MaxMB-CurrentMB
| eval MINTIME=strptime(MINUTC,"%FT%T%z")
| eval MAXTIME=strptime(MAXUTC,"%FT%T%z")
| eval MINUTC=strftime(MINTIME,"%F %T")
| eval MAXUTC=strftime(MAXTIME,"%F %T")
| eval DAYS_AGO=round((MAXTIME-MINTIME)/86400,2)
| eval YRS_AGO=round(DAYS_AGO/365.2425,2)
| eval frozenTimePeriodInDAYS=round(frozenTimePeriodInSecs/86400,2)
| eval DAYS_LEFT=frozenTimePeriodInDAYS-DAYS_AGO
| rename frozenTimePeriodInDAYS as frznTimeDAYS
| table title MINUTC MAXUTC frznTimeDAYS DAYS_LEFT DAYS_AGO YRS_AGO MaxEvents CurrentMB MaxMB MBDiff

Sample output:

title  MINUTC            MAXUTC            frznTimeDAYS  DAYS_LEFT  DAYS_AGO  YRS_AGO  MaxEvents  CurrentMB  MaxMB  MBDiff
XYZ    24-06-2018 01:24  10-02-2024 21:11  62            -1995.87   2057.87   5.63     13115066   6463       8192   1729

For index 'XYZ', frozenTimePeriod shows 62 days, so per the configured condition it should retain only the last two months of data, yet MINTIME still shows a very old date, '24-06-2018 01:24'. When I check the event counts older than 62 days, they are very few compared to the counts from the past 62 days (current event counts are very high). Why are these older events still showing in Splunk, and why only a few of them? I want to understand this scenario before increasing the frozen time period.
1) already answered - standard pie charts don't have this feature in Splunk 2) responded in thread 3) edit your dashboard panel and change the x-axis title to none  
It looks like you have found 2 events in your search not 1, but your screenshot doesn't show how many events were returned. You could also look in your search log to see what is happening.
Sorry, I did not get that - what should I share?
Consider installing a second copy of the connector.  You will need to put it in a different directory and will have to edit app.conf, but it should allow for a second connection to DUO.  Installing a second copy of an app should be done from the CLI.
What does your mitrepurplelab.log show as being passed in argv?

import logging
import sys

logging.basicConfig(filename='mitrepurplelab.log', level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

def main():
    logging.debug(f"Arguments received: {sys.argv}")
Hi @vijreddy30, what do you mean by Splunk DB logs - queries run against a DB with DB Connect, or something else? If you have the data from a DB in a Splunk index, you can search it as usual. Ciao. Giuseppe
Hi team,
1. How do I monitor the Splunk DB logs (already installed and configured)?
2. How do I ingest two Splunk DB queries into a single index?
It's not that, because in the search log we saw that $technique_id$ is passed correctly (T1059.003):

02-09-2024 10:37:46.161 INFO SearchParser [10449 searchOrchestrator] - PARSING: | makeresults | eval technique_id="T1059.003" | where isnotnull(technique_id) | mitrepurplelab T1059.003

And even when I run the command directly, I have the same issue:

| mitrepurplelab T1059.003

I think the issue is with commands.conf. When I put command.arg.1 = $technique_id$ in commands.conf, the script tries to run with $technique_id$ as an argument - but the literal string $technique_id$, not T1059.003 - so it doesn't work.
1) Yes, this is the first approach I took; I posted in the community later. But why is it not showing the value count over the chart?
2) By the way @ITWhisperer, if you have any ideas, please help me with this: https://community.splunk.com/t5/All-Apps-and-Add-ons/JSON-data-unexpected-value-count/m-p/677019#M80209
3) Is it possible to shorten the label names below the colors, e.g. from "Mon Jan 15" to "Jan 15", via the UI, the XML source, or SPL?
As you can see here, there are no configuration options for this feature
@ITWhisperer I am expecting the same as in the attached picture.
It is not possible to tell whether your subsearch should work or not since, despite being asked before, you have not shared your events (anonymised of course). If you want further assistance, please share some sample events preferably in a code block </> to prevent loss of vital information.