All Posts

I found out what the problem was. There is a Cribl server between the UF and the indexer, which I had mistakenly ruled out as the source of the problem during troubleshooting. I bypassed Cribl for a while and the problem disappeared. The rest went quickly: the Linux input/source had a persistent queue enabled in "Always On" mode, while the Windows input/source did not, and the Windows logs were fine the whole time. After turning the persistent queue off for the Linux data, the problem disappeared. I don't understand why the persistent queue behaves this way, but I don't have time to investigate further; maybe it's a Cribl bug or a misunderstanding of the functionality. The persistent queue is not required for this input in the project, so I can leave it off. For me, it's currently resolved. Thank you all for your help and your time.
I have a search that returns values by date and I want to calculate the change between the dates. What I want would look something like this:

index    1-Aug   8-Aug   Aug 8 change   15-Aug   Aug 15 change   22-Aug   Aug 22 change   29-Aug   Aug 29 change
index1   5.76    5.528   96%            5.645    102%            7.666    136%            6.783    88%
index2   0.017   0.023   135%           0.036    157%            0.033    92%             14.985   45409%
index3   2.333   2.257   97%            2.301    102%            2.571    112%            0.971    38%
index4   2.235   1.649   74%            2.01     122%            2.339    116%            2.336    100%
index5   19.114  14.179  74%            14.174   100%            18.46    130%            19.948   108%

I have a search that returns the values without the change calculations:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=strftime(_time,"%Y-%m-%d")
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
| addcoltotals label=Totals labelfield=index

If the headers were something like "week 1" and "week 2" I could get what I want, but the date headers change every time. I've tried using foreach to iterate through and calculate the changes from one column to the next, but haven't been able to come up with the right solution. Can anyone help?
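One way to sketch this with foreach, for anyone stuck on the same thing: because the date headers come from strftime "%Y-%m-%d", the 2* wildcard below assumes every date column starts with the year (e.g. 2024-08-01), and it relies on the columns coming out in chronological order, which they should since the rows were time-ordered before the transpose. Treat it as an untested starting point rather than a drop-in answer.

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=strftime(_time,"%Y-%m-%d")
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
``` walk the date columns left to right, keeping the previous column's value in prev ```
| foreach 2*
    [ eval "change <<FIELD>>" = if(isnotnull(prev), round(100 * '<<FIELD>>' / prev, 0) . "%", null()), prev = '<<FIELD>>' ]
| fields - prev

The new "change" columns are appended after the date columns, so a final table or fields command can interleave them, and addcoltotals can be added back afterwards for the numeric columns.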
Is there a way to see who modified system settings in Splunk Cloud? For example, we recently had an issue where a Splunk IP allow list was modified, but we cannot seem to find the activity in the _internal or _audit indexes.
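A couple of hedged places to look, since coverage depends on how the change was made: actions that go through splunkd's REST endpoints land in _audit with user and action fields, and Splunk 9.0+ (including Splunk Cloud) also has a configuration change tracker that writes to the _configtracker index. Changes made through the Admin Config Service (which is how IP allow lists are usually managed in Splunk Cloud) may not show up in either, in which case Splunk support would need to check on their side.

index=_audit action=edit*
| stats count by user action info

index=_configtracker
| table _time sourcetype _raw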
Thanks @ITWhisperer for your suggestion. I was able to produce the requested data via this command.
Try something like this

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total count(ERROR_MESSAGE) as fail
| eval success = total - fail
| eval success_rate = 100 * success/total
| fields success_rate
Hello. I have Splunk Enterprise (https://splunk6.****.net, run from a browser) and am running a query, collecting results, and saving them as a report (to get the output periodically, i.e. summary indexing). How do I connect to my Postgres database installed on my PC to send/store this data? DB Connect is not supported for my system (deprecated/sunset). Thanks
Hello All, I have a task to measure the compliance of the security solutions onboarded on the SIEM. That means I have to regularly check whether a solution is onboarded by checking if there are any logs being generated in a specific index. For example, my search query would be:

index=EDR
| stats count
| eval status=if(count > 0, "Compliant", "Not Compliant")
| fields - count

Results that I should get:

status
Compliant

I have a lookup table called compliance.csv and I need to update the status from "Not Compliant" to "Compliant":

Solution   Status
EDR        Not Compliant
DLP        Not Compliant

How can I use the outputlookup command to update the table rather than overwrite or append it?
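One way to update a single row in place, for anyone with the same question: start from the existing lookup, bring in the freshly computed status, and write the whole thing back. This is a minimal sketch that assumes the lookup columns are exactly Solution and Status as shown above; repeat the subsearch pattern (or build it with append) for the other solutions.

| inputlookup compliance.csv
| join type=left Solution
    [ search index=EDR
      | stats count
      | eval Solution="EDR", new_status=if(count > 0, "Compliant", "Not Compliant")
      | fields Solution new_status ]
| eval Status=coalesce(new_status, Status)
| fields Solution Status
| outputlookup compliance.csv

outputlookup always rewrites the file, but because the search starts from inputlookup compliance.csv, every existing row is carried through and only the matched row's Status changes.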
Hi,

So, I have an issue where I have a log with a field called ERROR_MESSAGES for each event that ends in an error. The other events, which have a NULL value under ERROR_MESSAGES, are successful events. I'm trying to get the percentage of successful events over the total events. This is the query I built, but when I run the search, success_rate comes back with no percentage value, and I know there are 338/3190 successful events. Any help would go a long way; I've been struggling. I feel like my SPL is getting better, but this one has me scratching my head.

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total
| appendpipe
    [| inputlookup fm4143_3d.csv
     | where isnull(ERROR_MESSAGE)
     | stats count as success]
| eval success_rate = ((success/total)*100)
| fields success_rate
Thank you Giuseppe, that was exactly what I was looking to achieve. 
Hello, can you please expand on how option 2 ("Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file so you'd only have the most recent data in Splunk.") could be implemented?
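If DB Connect is available in your environment, the usual sketch is a scheduled saved search that pulls the table with dbxquery and overwrites a lookup each run; the connection name, SQL, and lookup file below are placeholders to replace with your own.

| dbxquery connection="my_db_connection" query="SELECT id, name, status FROM my_table"
| outputlookup db_snapshot.csv

Schedule it (for example every hour, or however fresh the data needs to be) and each run replaces db_snapshot.csv with the latest rows, so searches that reference the lookup always see the most recent snapshot. Without DB Connect, the same effect needs an external script that exports the table and drops a CSV into the app's lookups directory on the same schedule.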
+1
Hi @Declan123,

did you try the transpose command? Try something like this:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| stats sum(totalprime) AS totalprime sum(totalvalue) AS totalvalue
| transpose

you'll have something like this:

Ciao.
Giuseppe
Hi All, I am trying to calculate 2 values by multiplication and then compare these 2 values on a column/bar chart. My query to calculate the 2 values is:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99

However, I am having trouble displaying them on a chart to compare their values. I would ideally like them both to be on the X axis and have the Y axis be a generic 'total value' or similar, just so I can easily see how one value compares against the other. When I attempt to do this with a query like the one below, I have to select one field as the X axis and one as the Y axis, which leads to the chart being incorrect.

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| chart sum(totalprime) as prime, sum(totalvalue) as value

I want totalvalue as one column and totalprime as another column, next to each other, so I can easily compare the total of each. Can anyone help with this? Thanks.
Taking your changes on board, please explain your logic for excluding "z" from your expected results. Also, your example does not have any duplicates, so it is unclear, from the expected results, how you want duplicates treated. Having an accurate representation of your data might help clarify this. Assuming "z" was supposed to be in the results, then my previous solution still works - the mvexpand expands the multivalue field created by list()

| makeresults format=csv data="Timestamp,ID,fieldA,fieldB
11115,1,,z
11245,1,a,
11378,1,b,
11768,1,,d
12550,2,c,
13580,2,,e
15703,2,,f
18690,3,,g"
| stats latest(fieldA) as fieldA list(fieldB) as fieldB by ID
| mvexpand fieldB
Hi, I have an Elastic DB that receives logs from various services directly, and I want to send these logs to Splunk Enterprise. Is there any documentation with installation instructions for the Elasticsearch Data Integrator? I couldn't configure it to make it work, and I can't find any documentation on how to install and configure this add-on. Please help me with that. @larmesto

Kind Regards,
Mohammad
Hi,

I have a Splunk Heavy Forwarder routing data to a Splunk Indexer. I also have a search head configured that performs distributed search on my indexer. My heavy forwarder has a forwarding license, so it does not index the data. However, I still want to use props.conf and transforms.conf on my forwarder. These configs are:

===============================================================
transforms.conf
[extract_syslog_fields]
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"
===============================================================
props.conf
[router_syslog]
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TRANSFORMS-extracted_fields = extract_syslog_fields
===============================================================

So what I expected is that when I search the index on my search head, I would see the fields "datetime", "syslog_level", "syslog_source", "syslog_message". However, this does not occur. On the other hand, if I configure field extractions on the search head, this works just fine and my syslog data is split up into those fields. Am I misunderstanding how transforms work? Is the heavy forwarder incapable of splitting up my syslog into different fields based on a delimiter because it's not indexing the data? Any help or advice would be highly appreciated. Thank you so much!
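For readers hitting the same wall: DELIMS/FIELDS transforms are search-time field extractions, and they are referenced from props.conf with REPORT-, not TRANSFORMS- (which is for index-time transforms that use REGEX and a DEST_KEY). Search-time extractions are applied where the search runs, so the stanzas below belong on the search head (or in an app deployed to it) rather than on the heavy forwarder; this sketch assumes the events really do arrive with sourcetype router_syslog.

props.conf (search head)
[router_syslog]
REPORT-syslog_fields = extract_syslog_fields

transforms.conf (search head)
[extract_syslog_fields]
# search-time delimiter extraction; field names copied from the forwarder config above
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"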
Thank you for the feedback, I have updated the post 
Hi @MK3,

as @ITWhisperer also said, this isn't a valid SPL search. In addition, you should save your csv as a lookup and use the lookup command. Also, you cannot put a field in a stats command without a function, and you don't need the stats command here. So try something like this:

index="...*" "... events{}.name=ResourceCreated
| bin _time span=1h
| spath "events{}.tags.A"
| rename "events{}.tags.A" AS "A" "events{}.tags.C" AS "C"
| lookup Map.csv C OUTPUT D
| table A B C D _time
| collect index=_xyz_summary marker="search_name=test_new_query_4cols"

Ciao.
Giuseppe
Hi @avikc100,

why did you use xyseries after stats? Please try:

index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPAEU.log" ("success" OR "fail*")
| eval status = if(searchmatch("success"), "Success", "Error")
| stats count by source status
| eval source=if(like(source,"%PAEU.log"), "Canada Pricing Call","XXX")

Ciao.
Giuseppe
Thanks, didn't realize we could do a by clause with tstats as well.
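For anyone finding this later, a by clause on tstats looks like this (the index here is just an example):

| tstats count where index=_internal by host sourcetype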