All Posts

Try something like this:

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total count(ERROR_MESSAGE) as fail
| eval success = total - fail
| eval success_rate = 100 * success / total
| fields success_rate
Hello. I have Splunk Enterprise (https://splunk6.****.net, run from a browser) and am running a query, collecting the results, and saving them as a report (to get the output periodically, i.e. summary indexing). How do I connect to my Postgres database, installed on my PC, to send/store this data? DB Connect is not supported for my system (deprecated / sunset). Thanks
Hello All, I have a task to measure the compliance of the security solutions onboarded to the SIEM. That means I have to regularly check that each solution is onboarded by checking whether any logs are being generated in a specific index. For example, my search query is:

index=EDR
| stats count
| eval status=if(count > 0, "Compliant", "Not Compliant")
| fields - count

The result I should get:

status
Compliant

I have a lookup table called compliance.csv and I need to update the status from "Not Compliant" to "Compliant".

Solution,Status
EDR,Not Compliant
DLP,Not Compliant

How can I use the outputlookup command to update the table, rather than overwrite or append it?
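A minimal sketch of one common pattern for this, assuming compliance.csv has exactly the two columns shown: compute the fresh status for one solution, append the existing lookup rows after it, keep only the first row per Solution (the fresh one), and write the table back.

index=EDR
| stats count
``` fresh status for this one solution ```
| eval Solution="EDR", Status=if(count > 0, "Compliant", "Not Compliant")
| fields Solution Status
``` append the existing rows; dedup keeps the first (freshest) row per Solution ```
| inputlookup append=true compliance.csv
| dedup Solution
| outputlookup compliance.csv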
Hi, so I've got an issue where I have a log with a field called ERROR_MESSAGES for each event that ends in an error. The other events, which have a NULL value under ERROR_MESSAGES, are successful events. I'm trying to get the percentage of successful events over the total events. This is the query I built, but when I run the search, success_rate comes back with no percentage value, and I know there are 338/3190 successful events. Any help would go a long way; I've been struggling. I feel like my SPL is getting better, but man, this one has me scratching my head.

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total
| appendpipe
    [| inputlookup fm4143_3d.csv
     | where isnull(ERROR_MESSAGE)
     | stats count as success]
| eval success_rate = ((success/total)*100)
| fields success_rate
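For context on why this returns nothing: appendpipe puts success on a separate row from total, so no single row carries both fields and the eval yields null everywhere. A minimal single-pass sketch, assuming the same lookup and that a null ERROR_MESSAGE marks a successful event:

| inputlookup fm4143_3d.csv
``` flag each successful event, then compute both counts in one stats pass ```
| eval is_success = if(isnull(ERROR_MESSAGE), 1, 0)
| stats count as total sum(is_success) as success
| eval success_rate = round(100 * success / total, 2)
| fields success_rate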
Thank you Giuseppe, that was exactly what I was looking to achieve. 
Hello, can you please expand on how option 2 ("Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file so you'd only have the most recent data in Splunk.") could be implemented?
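A minimal sketch of that batch pattern, assuming Splunk DB Connect is installed and a connection named postgres_conn is already configured (the connection name, table, and columns here are hypothetical): schedule a search like the one below, and each run replaces the lookup with a fresh snapshot.

| dbxquery connection=postgres_conn query="SELECT id, name, status FROM my_table"
``` outputlookup without append=true overwrites the file, so only the latest snapshot remains ```
| outputlookup my_table_snapshot.csv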
+1
Hi @Declan123, did you try the transpose command? Try something like this:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| stats sum(totalprime) AS totalprime sum(totalvalue) AS totalvalue
| transpose

you'll have something like this: [screenshot] Ciao. Giuseppe
Hi All, I am trying to calculate 2 values by multiplication and then compare those 2 values on a column/bar chart. My query to calculate the 2 values is:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99

However, I am having trouble displaying them on a chart to compare their values. Ideally, I would like them both on the X axis, with the Y axis as a generic 'total value' or similar, just so I can easily see how one value compares against the other. When I attempt this with a query like the one below, I have to select one field as the X axis and one as the Y axis, which leads to the chart being incorrect.

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| chart sum(totalprime) as prime, sum(totalvalue) as value

I want totalvalue as one column and totalprime as another column, next to each other, to allow me to easily compare the two totals. Can anyone help with this? Thanks.
Taking on board your changes: please explain your logic for excluding "z" from your expected results. Also, your example does not have any duplicates, so it is unclear, from the expected results, how you want duplicates treated. Having an accurate representation of your data might help clarify this. Assuming "z" was supposed to be in the results, my previous solution still works - the mvexpand expands the multivalue field created by list():

| makeresults format=csv data="Timestamp,ID,fieldA,fieldB
11115,1,,z
11245,1,a,
11378,1,b,
11768,1,,d
12550,2,c,
13580,2,,e
15703,2,,f
18690,3,,g"
| stats latest(fieldA) as fieldA list(fieldB) as fieldB by ID
| mvexpand fieldB
Hi, I have an Elastic DB that receives logs from various services directly, and I want to send these logs to Splunk Enterprise. Is there any documentation with installation instructions for the Elasticsearch Data Integrator? I couldn't configure it to make it work, and I can't find any documentation on how to install and configure this add-on. Please help me with that. @larmesto Kind Regards, Mohammad
Hi, I have a Splunk Heavy Forwarder routing data to a Splunk Indexer. I also have a search head configured that performs distributed search on my indexer. My Heavy Forwarder has a forwarding license, so it does not index the data. However, I still want to use props.conf and transforms.conf on my forwarder. These configs are:

transforms.conf

[extract_syslog_fields]
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"

props.conf

[router_syslog]
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TRANSFORMS-extracted_fields = extract_syslog_fields

So what I expected is that when I search the index on my search head, I would see the fields "datetime", "syslog_level", "syslog_source", "syslog_message". However, this does not occur. On the other hand, if I configure the field extractions on the search head, this works just fine and my syslog data is split up into those fields. Am I misunderstanding how transforms work? Is the heavy forwarder incapable of splitting up my syslog into different fields based on a delimiter because it's not indexing the data? Any help or advice would be highly appreciated. Thank you so much!
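A likely cause, for what it's worth: DELIMS and FIELDS in transforms.conf define a search-time extraction, which props.conf must reference with REPORT-, whereas the TRANSFORMS- class is index-time and expects settings like REGEX and DEST_KEY. Search-time extractions are applied where the search runs, which is why the same settings work when configured on the search head. A minimal sketch of the search-time form, assuming the sourcetype is router_syslog:

# props.conf (on the search head)
[router_syslog]
REPORT-extracted_fields = extract_syslog_fields

# transforms.conf (on the search head)
[extract_syslog_fields]
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"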
Thank you for the feedback, I have updated the post 
Hi @MK3, as @ITWhisperer also said, this isn't a valid SPL search; in addition, you should save your csv as a lookup and use the lookup command. Also, you cannot put a field in a stats command without a function, and you don't need the stats command here. So try something like this:

index="...*" "..." events{}.name=ResourceCreated
| bin _time span=1h
| spath "events{}.tags.A"
| rename "events{}.tags.A" AS "A" "events{}.tags.C" AS "C"
| lookup Map.csv C OUTPUT D
| table A B C D _time
| collect index=_xyz_summary marker="search_name=test_new_query_4cols"

Ciao. Giuseppe
Hi @avikc100, why did you use xyseries after stats? Please try:

index="webmethods_prd" host="USPGH-WMA2AISP*" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPAEU.log" ("success" OR "fail*")
| eval status = if(searchmatch("success"), "Success", "Error")
| stats count by source status
| eval source=if(like(source, "%PAEU.log"), "Canada Pricing Call", "XXX")

Ciao. Giuseppe
Thanks, didn't realize we could do a by clause with tstats as well.
Since the field name has a dot in it, which is also the string concatenation operator, have you tried putting the field name in single quotes?

if('detail.accountId' == "1234567890", "AccountX", "UnknownAccountName")
Where does 2-a2 come from? Assuming values in fieldB are unique, you could try something like this

| makeresults format=csv data="ID,fieldA,fieldB
1,1-a1,
1,1-a2,
1,,1-b1
2,2-a1,
2,,2-b1
2,,2-b2
3,,1-b1"
``` The lines above emulate the data you shared ```
| stats latest(fieldA) as fieldA list(fieldB) as fieldB by ID
| mvexpand fieldB
Hi there, Splunk Community! First time poster! Whoo! Let me outline the situation, goal, and problem faced briefly: I have a field in a dataset called `detail.accountId` that holds an AWS account ID. My goal is to create a calculated field called "AccountName" for each `detail.accountId` that would theoretically look something like this:

if(detail.accountId == "1234567890", "AccountX", "UnknownAccountName")

The problem I'm facing is that the eval expression always comes out false, resulting in the AccountName column always displaying "UnknownAccountName". No matter whether I use tostring(detail.accountId), trim(detail.accountId), match(detail.accountId), etc. in the eval expression comparison, it's always false, even though the value "1234567890" definitely exists as the detail.accountId. Am I doing something incorrectly here that may be obvious to someone? Thank you very much for the help! Tyler
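To see the single-quote fix from the reply above in context, a minimal runnable sketch; the makeresults event is hypothetical stand-in data:

| makeresults
``` fabricate one JSON event so spath extracts a detail.accountId field ```
| eval _raw="{\"detail\": {\"accountId\": \"1234567890\"}}"
| spath
``` single quotes make eval read detail.accountId as one field name, not detail concatenated with accountId ```
| eval AccountName = if('detail.accountId' == "1234567890", "AccountX", "UnknownAccountName")
| table detail.accountId AccountName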
Hello, I want to integrate Cloudflare with our Splunk Enterprise via Cloudflare's Logpull method. With this method, I'll pull the logs from Cloudflare via the REST API every hour. Can someone please help me with how to do that? Is there any add-on or app that I can use for calling the REST API, or are there any other methods I could use?