All Posts

You could try something along these lines

| makeresults format=csv data="index,1-Aug,8-Aug,15-Aug,22-Aug,29-Aug
index1,5.76,5.528,5.645,7.666,6.783
index2,0.017,0.023,0.036,0.033,14.985
index3,2.333,2.257,2.301,2.571,0.971
index4,2.235,1.649,2.01,2.339,2.336
index5,19.114,14.179,14.174,18.46,19.948"
``` the lines above simulate your data (without the calculations) ```
| untable index date size
| eval date=strptime(date."-2024","%d-%b-%Y")
| fieldformat date=strftime(date,"%F")
| sort 0 index date
| streamstats last(size) as previous window=1 global=f current=f by index
| eval relative_size = 100 * size / previous
| fields - previous
| appendpipe
    [| eval date=strftime(date, "%F")." change"
    | xyseries index date relative_size]
| appendpipe
    [| eval date=strftime(date, "%F")
    | xyseries index date size]
| fields - date size relative_size
| stats values(*) as * by index
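Outside Splunk, the week-over-week arithmetic that the streamstats/eval steps above perform can be sketched in plain Python. The sample values are copied from the simulated data; everything else is purely illustrative:

```python
# Week-over-week change: each week's size as a percentage of the previous
# week's size, per index -- the same arithmetic as streamstats last(size)
# followed by eval relative_size = 100 * size / previous.
data = {
    "index1": [5.76, 5.528, 5.645, 7.666, 6.783],
    "index2": [0.017, 0.023, 0.036, 0.033, 14.985],
}

for name, sizes in data.items():
    # zip pairs each week with the one before it (like the streamstats window)
    changes = [round(100 * cur / prev) for prev, cur in zip(sizes, sizes[1:])]
    print(name, changes)
```

For index1 this yields 96, 102, 136, 88 — the same percentages as in the expected-output table in the question.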
Hi @ferdousfahim , I usually use these transformations at search time, but to apply them on forwarders, you have to use INDEXED_EXTRACTIONS=CSV in props.conf. For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Extractfieldsfromfileswithstructureddata Ciao. Giuseppe
Hi @MoeTaher , please try something like this:

index=EDR
| stats count
| eval Status=if(count > 0, "Compliant", "Not Compliant"), Solution="EDR"
| fields - count
| append
    [ | inputlookup compliance.csv
    | fields Solution Status ]
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv

Ciao. Giuseppe
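As a sanity check of the merge semantics in that answer: because the fresh search result comes before the inputlookup rows, first(Status) per Solution keeps the newly computed status and falls back to the lookup's existing one. A rough Python model (the dict contents are invented to mirror the example):

```python
# compliance.csv before the search, and the status produced by the new search
lookup = {"EDR": "Not Compliant", "DLP": "Not Compliant"}
new_result = {"EDR": "Compliant"}

# first(Status) per Solution: the freshly computed row wins where both exist;
# every other Solution keeps its status -- a merge, not an overwrite or append
merged = {**lookup, **new_result}
print(merged)
```

Only EDR flips to "Compliant"; DLP's row survives untouched, which is exactly the "update, don't overwrite" behaviour the question asked for.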
Thanks, it worked! All I have to do is convert it to a percentage and we're all good to go. I'll pass along the karma.
Hi @MK3 , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
I found out what the problem was. There is a Cribl server between the UF and the indexer, which I mistakenly ruled out as the source of the problem during troubleshooting. I bypassed Cribl for a while and the problem disappeared. The rest went pretty fast: I found that a persistent queue was enabled in "Always On" mode for the Linux input/source. The persistent queue was not turned on for the Windows input/source, and the Windows logs were OK the whole time. After turning it off for the Linux data, the problem disappeared. I don't understand why the persistent queue behaves this way, but I don't have time to investigate further. Maybe it's a Cribl bug or a misunderstanding of the functionality on my part. The persistent queue is not required in this project, so I can leave it off. For me, it's currently resolved. Thank you all for your help and your time.
I have a search that returns values for dates and I want to calculate the changes between the dates. What I want would look something like this:

index    1-Aug    8-Aug    Aug 8 change    15-Aug    Aug 15 change    22-Aug    Aug 22 change    29-Aug    Aug 29 change
index1   5.76     5.528    96%             5.645     102%             7.666     136%             6.783     88%
index2   0.017    0.023    135%            0.036     157%             0.033     92%              14.985    45409%
index3   2.333    2.257    97%             2.301     102%             2.571     112%             0.971     38%
index4   2.235    1.649    74%             2.01      122%             2.339     116%             2.336     100%
index5   19.114   14.179   74%             14.174    100%             18.46     130%             19.948    108%

I have a search that returns the values without the change calculations:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=strftime(_time,"%Y-%m-%d")
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
| addcoltotals label=Totals labelfield=index

If the headers were something like "week 1" "week 2" I could get what I want, but with date headers that change every time, I can't. I've tried using foreach to iterate through and calculate the changes from one column to the next, but haven't been able to come up with the right solution. Can anyone help?
Is there a way to see who modified system settings in Splunk Cloud? For example, we recently had an issue where a Splunk IP allow list was modified, but we cannot seem to find the activity in the _internal or _audit indexes.
Thanks @ITWhisperer for your suggestion. I was able to produce the requested data via this command.
Try something like this

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total count(ERROR_MESSAGE) as fail
| eval success = total - fail
| eval success_rate = 100 * success / total
| fields success_rate
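The key point in that search is that count(ERROR_MESSAGE) only counts events where the field is non-null, so total minus fail gives the successes inside a single stats call — one result row, so the eval sees both values. A minimal Python sketch of the same counting, with made-up rows standing in for the lookup:

```python
# Success rate = events without an error message, as a share of all events.
# ERROR_MESSAGE is None on successful events, as in the poster's lookup.
rows = [
    {"FLOW_ID": 1, "ERROR_MESSAGE": None},
    {"FLOW_ID": 2, "ERROR_MESSAGE": "timeout"},
    {"FLOW_ID": 3, "ERROR_MESSAGE": None},
    {"FLOW_ID": 4, "ERROR_MESSAGE": "bad input"},
]

total = len(rows)                                   # count(FLOW_ID)
fail = sum(1 for r in rows if r["ERROR_MESSAGE"])   # count(ERROR_MESSAGE)
success_rate = 100 * (total - fail) / total
print(success_rate)
```

With 2 failures out of 4 events this gives 50.0, whereas computing total and success in separate result rows (as appendpipe does) leaves nothing for the eval to divide.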
Hello. I have Splunk Enterprise (https://splunk6.****.net, run from a browser) and am running a query, collecting the results, and saving them as a report (to get the output periodically, i.e. summary indexing). How do I connect to my Postgres database installed on my PC to send/store this data? DB Connect is not supported for my system (deprecated / sunset). Thanks
Hello All, I have a task to measure the compliance of the security solutions onboarded on the SIEM. That means I have to regularly check whether a solution is onboarded by checking if any logs are being generated in a specific index. For example, my search query would be:

index=EDR
| stats count
| eval status=if(count > 0, "Compliant", "Not Compliant")
| fields - count

Results I should have:

status
Compliant

I have a lookup table called compliance.csv and I need to update the status from "Not Compliant" to "Compliant":

Solution    Status
EDR         Not Compliant
DLP         Not Compliant

How can I utilize the outputlookup command to update the table, not overwrite or append it?
Hi, So, I have a log with a field called ERROR_MESSAGES for each event that ends in an error. The other events, which have a NULL value under ERROR_MESSAGES, are successful events. I'm trying to get the percentage of successful events over total events. This is the query I built, but when I run the search, success_rate comes back with no percentage value, and I know there are 338/3190 successful events. Any help would go a long way; I've been struggling. I feel like my SPL is getting better, but man, this one has me scratching my head.

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total
| appendpipe
    [| inputlookup fm4143_3d.csv
    | where isnull(ERROR_MESSAGE)
    | stats count as success]
| eval success_rate = ((success/total)*100)
| fields success_rate
Thank you Giuseppe, that was exactly what I was looking to achieve. 
Hello, can you please expand on how option 2) "Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file so you'd only have the most recent data in Splunk." could be implemented?
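Not the original poster, but one common shape for option 2 is a scheduled script (cron, or a Splunk scripted input) that runs the query and rewrites the lookup CSV from scratch on every run. A minimal sketch using Python's DB-API — sqlite3 here purely as a stand-in; for Postgres the same pattern works with a driver such as psycopg2, and the table/column names below are invented:

```python
import csv
import sqlite3

# Stand-in database; with Postgres you'd do psycopg2.connect(...) instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

cur = conn.execute("SELECT id, name FROM users")
with open("users_lookup.csv", "w", newline="") as f:   # "w" overwrites, never appends
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])   # header row from the cursor
    writer.writerows(cur)                              # one CSV row per DB row
```

Dropping the resulting file into an app's lookups directory (and overwriting it on each scheduled run, as "w" mode does) is what keeps only the most recent data in Splunk.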
+1
Hi @Declan123 , did you try the transpose command? Try something like this:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| stats sum(totalprime) AS totalprime sum(totalvalue) AS totalvalue
| transpose

you'll have the two totals as separate rows, which chart as two columns side by side. Ciao. Giuseppe
Hi All, I am trying to calculate 2 values by multiplication and then compare these 2 values on a column/bar chart. My query to calculate the 2 values is:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99

However, I am having trouble displaying them on a chart to compare their values. I would ideally like them both to be on the X axis, with the Y axis as a generic "total value" or similar, just so I can easily see how one value compares against the other. When I attempt to do this with a query like the one below, I have to select one field as the X axis and one as the Y axis, which leads to the chart being incorrect.

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| chart sum(totalprime) as prime, sum(totalvalue) as value

I want totalvalue as one column and totalprime as another column, next to each other, so I can easily compare the two totals. Can anyone help with this? Thanks.
Taking your changes on board, please explain your logic for excluding "z" from your expected results. Also, your example does not have any duplicates, so it is unclear from the expected results how you want duplicates treated. Having an accurate representation of your data might help clarify this. Assuming "z" was supposed to be in the results, then my previous solution still works - the mvexpand expands the multivalue field created by list():

| makeresults format=csv data="Timestamp,ID,fieldA,fieldB
11115,1,,z
11245,1,a,
11378,1,b,
11768,1,,d
12550,2,c,
13580,2,,e
15703,2,,f
18690,3,,g"
| stats latest(fieldA) as fieldA list(fieldB) as fieldB by ID
| mvexpand fieldB
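For anyone trying to follow what that stats + mvexpand pair does, here is a rough Python model of the semantics: latest() keeps the most recent non-null value per ID, list() collects every non-null value in order, and mvexpand emits one row per collected value. The tuples mirror the makeresults data above; the code itself is only an illustration, not how Splunk implements it:

```python
# (Timestamp, ID, fieldA, fieldB) -- None plays the role of a missing field
events = [
    (11115, 1, None, "z"), (11245, 1, "a", None), (11378, 1, "b", None),
    (11768, 1, None, "d"), (12550, 2, "c", None), (13580, 2, None, "e"),
    (15703, 2, None, "f"), (18690, 3, None, "g"),
]

grouped = {}
for ts, _id, a, b in sorted(events):           # process in time order
    latest_a, bs = grouped.setdefault(_id, [None, []])
    if a is not None:
        grouped[_id][0] = a                    # latest(): most recent non-null
    if b is not None:
        bs.append(b)                           # list(): every non-null, in order

# mvexpand fieldB: one output row per collected fieldB value
rows = [(_id, a, b) for _id, (a, bs) in sorted(grouped.items()) for b in bs]
print(rows)
```

For ID 1 this keeps fieldA="b" (the latest non-null) and expands to two rows for "z" and "d" — so "z" is indeed in the output, paired with the latest fieldA.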
Hi, I have an Elastic DB that receives logs from various services directly, and I want to send these logs to Splunk Enterprise. Is there any documentation with installation instructions for the Elasticsearch Data Integrator? I couldn't configure it to make it work, and I can't find any documentation on how to install and configure this add-on. Please help me with that. @larmesto Kind Regards, Mohammad