All Topics

I have a search that returns values for dates, and I want to calculate the changes between the dates. What I want would look something like this:

index    1-Aug    8-Aug    Aug 8 change    15-Aug   Aug 15 change   22-Aug   Aug 22 change   29-Aug   Aug 29 change
index1   5.76     5.528    96%             5.645    102%            7.666    136%            6.783    88%
index2   0.017    0.023    135%            0.036    157%            0.033    92%             14.985   45409%
index3   2.333    2.257    97%             2.301    102%            2.571    112%            0.971    38%
index4   2.235    1.649    74%             2.01     122%            2.339    116%            2.336    100%
index5   19.114   14.179   74%             14.174   100%            18.46    130%            19.948   108%

I have a search that returns the values without the change calculations:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=(strftime(_time,"%Y-%m-%d"))
| search day_of_week=Tue
| fields - _time day_of_week
| transpose header_field=date
| rename column AS index
| sort index
| addcoltotals label=Totals labelfield=index

If the headers were something like "week 1", "week 2", I could get what I want, but with date headers that change every time, I've tried using foreach to iterate through and calculate the changes from one column to the next, but I haven't been able to come up with the right solution. Can anyone help?

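One way to sidestep the ever-changing date headers is to compute the week-over-week change before transposing, while the data is still in long form. A rough sketch, assuming the saved search returns one row per _time with one column per index; it folds the change into the same cell as the value, since building separate, correctly ordered change columns is harder:

| loadjob savedsearch="me@email.com:splunk_instance_monitoring:30 Days Ingest By Index"
| eval day_of_week=strftime(_time,"%a"), date=strftime(_time,"%Y-%m-%d")
| search day_of_week=Tue
| fields - _time day_of_week
| untable date index value
| sort 0 index date
``` carry the previous Tuesday's value forward per index, then compute the percent change ```
| streamstats current=f window=1 last(value) as prev by index
| eval change=if(isnotnull(prev), round(value/prev*100)."%", null())
| eval cell=if(isnotnull(change), value." (".change.")", tostring(value))
| xyseries index date cell

If separate change columns are a must, the same streamstats/change logic still applies; only the final reshaping step differs.
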
Is there a way to see who modified system settings in Splunk Cloud? For example, we recently had an issue where a Splunk IP allow list was modified, but we cannot seem to find the activity in the _internal or _audit indexes.

Hello. I have Splunk Enterprise (https://splunk6.****.net, run from a browser) and am running a query, collecting results, and saving them as a report (to get the output periodically, i.e. summary indexing). How do I connect to my Postgres database installed on my PC to send/store this data? DB Connect is not supported for my system (deprecated/sunset). Thanks

Hello All,

I have a task to measure the compliance of security solutions onboarded to the SIEM. That means I have to regularly check whether a solution is onboarded by checking if any logs are being generated in a specific index.

For example, my search query is:

index=EDR
| stats count
| eval status=if(count > 0, "Compliant", "Not Compliant")
| fields - count

The result I should get:

status
Compliant

I have a lookup table called compliance.csv and I need to update the status from "Not Compliant" to "Compliant".

Solution   Status
EDR        Not Compliant
DLP        Not Compliant

How can I use the outputlookup command to update the table rather than overwrite or append it?

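One pattern that behaves like an in-place update: compute the fresh status row, append the existing lookup underneath, keep only the newest row per Solution, and write the file back. A sketch, assuming compliance.csv has exactly the Solution and Status columns shown above:

index=EDR
| stats count
| eval Solution="EDR", Status=if(count>0, "Compliant", "Not Compliant")
| fields Solution Status
``` rows from the existing lookup come after the freshly computed row, so dedup keeps the new one ```
| inputlookup append=true compliance.csv
| dedup Solution
| outputlookup compliance.csv

Rows for solutions that are not recomputed in this search (DLP here) are carried over from the existing lookup unchanged.
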
Hi,

So, I have a log with a field called ERROR_MESSAGE for each event that ends in an error. The other events, which have a NULL value for ERROR_MESSAGE, are successful events. I'm trying to get the percentage of successful events over total events. This is the query I built, but when I run the search, success_rate comes back with no percentage value, and I know there are 338/3190 successful events. Any help would go a long way; I've been struggling. I feel like my SPL is getting better, but man, this one has me scratching my head.

| inputlookup fm4143_3d.csv
| stats count(FLOW_ID) as total
| appendpipe [| inputlookup fm4143_3d.csv | where isnull(ERROR_MESSAGE) | stats count as success]
| eval success_rate = ((success/total)*100)
| fields success_rate

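For what it's worth, the appendpipe version puts total and success on two different rows, so the final eval has nothing to combine. Computing both counts in a single stats call avoids that. A sketch; if the lookup stores empty strings rather than truly missing values, swap isnull(ERROR_MESSAGE) for ERROR_MESSAGE="":

| inputlookup fm4143_3d.csv
``` count every row, and separately count the rows with no error message ```
| stats count as total, sum(eval(if(isnull(ERROR_MESSAGE),1,0))) as success
| eval success_rate = round((success/total)*100, 2)
| fields success_rate
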
Hi All, I am trying to calculate 2 values by multiplication and then compare these 2 values on a column/bar chart. My query to calculate the 2 values is:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99

However, I am having trouble displaying them on a chart to compare their values. I would ideally like them both to be on the X axis, with the Y axis as a generic "total value" or similar, just so I can easily see how one value compares against the other. When I attempt to do this with a query like the one below, I have to select one field as the X axis and one as the Y axis, which leads to the chart being incorrect.

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59
| eval totalvalue=286*5.99
| chart sum(totalprime) as prime, sum(totalvalue) as value

I want totalvalue as a column and totalprime as another column, next to each other, so I can easily compare the total amount of each. Can anyone help with this? Thanks.

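One way to get the two totals as side-by-side columns is to aggregate them onto a single row and then transpose, so each total becomes its own category on the X axis. A sketch built on the fields above:

source="transaction file.csv" Description="VDP-WWW.AMAZON"
| rename "Debit Amount" AS DebitAmount
| eval totalprime=DebitAmount*59, totalvalue=286*5.99
| stats sum(totalprime) as prime, sum(totalvalue) as value
``` flip the single row so 'prime' and 'value' become two rows of one 'total' column ```
| transpose column_name=metric
| rename "row 1" as total

With metric on the X axis and total on the Y axis, the two columns sit next to each other on a generic value scale.
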
Hi, I have an Elastic DB that receives logs from various services directly, and I want to send these logs to Splunk Enterprise. Is there any documentation with installation instructions for the Elasticsearch Data Integrator? I couldn't configure it to make it work, and I can't find any documentation on how to install and configure this add-on. Please help me with that. @larmesto

Kind Regards,
Mohammad

Hi,

I have a Splunk Heavy Forwarder routing data to a Splunk indexer. I also have a search head configured that performs distributed search on my indexer. My heavy forwarder has a forwarding license, so it does not index the data. However, I still want to use props.conf and transforms.conf on my forwarder. These configs are:

transforms.conf
[extract_syslog_fields]
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"

props.conf
[router_syslog]
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TRANSFORMS-extracted_fields = extract_syslog_fields

What I expected is that when I search the index on my search head, I would see the fields "datetime", "syslog_level", "syslog_source", and "syslog_message". However, this does not occur. On the other hand, if I configure field extractions on the search head, it works just fine and my syslog data is split up into those fields. Am I misunderstanding how transforms work? Is the heavy forwarder incapable of splitting my syslog into different fields based on a delimiter because it's not indexing the data? Any help or advice would be highly appreciated. Thank you so much!

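For what it's worth, DELIMS/FIELDS transforms are search-time extractions: they are applied where the search runs and have to be referenced with REPORT- rather than TRANSFORMS- (TRANSFORMS- is for index-time transforms, which need REGEX/FORMAT instead of DELIMS). A sketch of the search-time version, deployed to the search head, assuming the sourcetype stays router_syslog:

# props.conf (search head)
[router_syslog]
REPORT-syslog_fields = extract_syslog_fields

# transforms.conf (search head, same app)
[extract_syslog_fields]
DELIMS = "|"
FIELDS = "datetime", "syslog_level", "syslog_source", "syslog_message"

The parsing settings (LINE_BREAKER, TIME_FORMAT, etc.) still belong on the heavy forwarder, since that is where the data is parsed.
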
Hi there, Splunk Community! First time poster! Whoo! Let me outline the situation, goal, and problem briefly:

I have a field in a dataset called `detail.accountId` that is the number of an AWS Account ID. My goal is to create a calculated field called "AccountName" for each `detail.accountId` that would theoretically look something like this:

if(detail.accountId == "1234567890", "AccountX", "UnknownAccountName")

The problem I'm facing is that the eval expression always comes out False, so the AccountName column always displays "UnknownAccountName". No matter whether I use tostring(detail.accountId), trim(detail.accountId), match(detail.accountId), etc. in the comparison, it's always false, even though the value "1234567890" definitely exists as the detail.accountId. Am I doing something incorrectly here that may be obvious to someone?

Thank you very much for the help!
Tyler

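One thing worth checking: in eval expressions, a field name that contains a dot has to be wrapped in single quotes, otherwise detail.accountId is not read as a single field reference and the comparison evaluates to false. A sketch of the calculated field definition with that change:

| eval AccountName=if('detail.accountId'=="1234567890", "AccountX", "UnknownAccountName")
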
Hello, I want to integrate Cloudflare with our Splunk Enterprise via Cloudflare's Logpull method. In this method, I'll pull the logs from Cloudflare every hour via the REST API. Can someone please help me with how to do that? Is there any add-on or app that I can use for calling the REST API, or is there another method I can use?

Hello, I want to use an external file that contains 2 columns, C and D, and apply those mappings to an existing query that displays a table with the value C (so it is like a case statement that gives a D value for each value of C by checking an external CSV file). What is wrong in this syntax?

index="...*" "events{}.name"=ResourceCreated
| bin _time span=1h
| spath "events{}.tags.A"
| dedup "events{}.tags.A"
| inputcsv append=t Map.csv
| stats D as D by C
| table "events{}.tags.A" "events{}.tags.B" "events{}.tags.C" "events{}.tags.D" _time
| collect index=_xyz_summary marker="search_name=\"test_new_query_4cols\""

I get this error:

Error in 'stats' command: The number of wildcards between field specifier '*' and rename specifier 'D' do not match. Note: empty field specifiers implies all fields, e.g. sum() == sum(*).

I tried switch/case but it always shows the default value. Thanks

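Rather than pulling Map.csv in with inputcsv and stats (stats needs an aggregation function, which is what the wildcard/rename error is complaining about), the lookup command can do the per-row C-to-D mapping directly. A sketch, assuming Map.csv is uploaded as a lookup table file with columns C and D, and that C should be matched against events{}.tags.C:

index="...*" "events{}.name"=ResourceCreated
| bin _time span=1h
| spath "events{}.tags.A"
| dedup "events{}.tags.A"
``` map the C value to its D value straight from the CSV ```
| lookup Map.csv C AS "events{}.tags.C" OUTPUTNEW D AS "events{}.tags.D"
| table "events{}.tags.A" "events{}.tags.B" "events{}.tags.C" "events{}.tags.D" _time
| collect index=_xyz_summary marker="search_name=\"test_new_query_4cols\""
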
Hi, I have a requirement to create a single value visual with a trendline. I have looked at the sample queries on the Dashboard Studio examples hub. Below is my base query:

| tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"

Expected result: something like this.

I have been trying the 2 searches below, but neither shows the expected result.

| tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log"
| chart count(distinct_host) by _time

OR

| tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log"
| timechart count(distinct_host) by _time

If I try the query below without tstats, it works, but I need to use tstats from a performance point of view.

index=okta sourcetype="OktaIM2:log"
| chart dc(host) by _time span=1h

Any suggestion on how to generate a single value trendline with tstats?

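A single value with a trendline just needs a _time column in the results, which tstats can produce directly by splitting on _time. A sketch:

| tstats dc(host) as distinct_host where index=okta sourcetype="OktaIM2:log" by _time span=1h

An equivalent prestats form hands the final aggregation to timechart instead:

| tstats prestats=t dc(host) where index=okta sourcetype="OktaIM2:log" by _time span=1h
| timechart span=1h dc(host) as distinct_host

Either result should drive the single value visualization with its trend/sparkline option.
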
We have multiple Docker containers, and there are some logs (created by our application; the same log gets updated) inside those containers. We want to monitor those logs every 5 minutes using a Splunk UF that sits outside the containers. The Splunk UF will send data to a Splunk indexer on another server. Can you please tell me the options for doing this?

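One common option, assuming the containers can bind-mount (or volume-mount) their log directories to a path on the host, is to point a regular monitor stanza at that host path from the UF. The UF tails the files continuously, so no 5-minute polling is needed. A sketch with illustrative paths and index/sourcetype names:

# inputs.conf on the Universal Forwarder (paths, index, and sourcetype below are placeholders)
# assumes each container mounts its log directory to /var/log/containers/<name>/ on the host
[monitor:///var/log/containers/*/app.log]
index = app_logs
sourcetype = app:log
disabled = false
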
Is there any difference between an empty macro containing

()

or

""

I see searches with both; both return results but do not behave the same as index=*. So what do these empty macros actually do? Any clues on which logs to check or where I can drill down further?

Hi,

I have a single search that produces the following table, where fieldA and fieldB are arbitrary strings that may be duplicated. This is an exact representation of each event: each event may have a key "fieldA" or a key "fieldB", but not both, and they always have an ID and Timestamp.

Timestamp   ID   fieldA   fieldB
11115       1             "z"
11245       1    "a"
11378       1    "b"
11768       1             "d"
11879       1             "d"
12550       2    "c"
13580       2             "e"
15703       2             "f"
18690       3             "g"

I need help transforming the data as follows:

ID   fieldA   fieldB
1    "b"      "d"
1    "b"      "d"
2    "c"      "e"
2    "c"      "f"
3             "g"

Thanks to the suggestion below, I have tried `stats latest(fieldA) list(fieldB)`, but I would prefer not to have any multivalued fields. For every distinct value of fieldA, the latest record with that value is kept, and any records with that ID occurring before that record are discarded. There is no requirement to use 2 searches. Hope that makes it clearer and easier.

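A sketch of one way to do this in a single search: sort by time, carry the last-seen fieldA forward within each ID, work out the Timestamp of each ID's latest fieldA record, and keep only the fieldB rows at or after that point (IDs with no fieldA at all are kept as-is):

| eval Timestamp=tonumber(Timestamp)
| sort 0 ID Timestamp
``` fill fieldA forward within each ID ```
| streamstats last(fieldA) as fieldA_filled by ID
``` per ID, find the Timestamp of the latest record that carried a fieldA (-1 if none) ```
| eventstats max(eval(if(isnotnull(fieldA), Timestamp, -1))) as cutoff by ID
| where isnotnull(fieldB) AND Timestamp >= cutoff
| table ID fieldA_filled fieldB
| rename fieldA_filled as fieldA

Against the sample data this should yield the five rows shown above, with no multivalue fields.
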
Not sure if this is feasible. Basically, I would like a chart that shows the average of a statistic for different nodes and the distinct count of nodes. The 2 searches would be something like:

1. index=xxx sourcetype=yyy | timechart avg(stat1) by node
2. index=xxx sourcetype=yyy | timechart dc(node)

Both searches would show up on the same timechart panel for the same period with the same time span. Sorry if this is unclear; happy to clarify. I tried eventstats, append, appendcols, and join, but they do not seem to work for this. Could be I'm misusing them, though.

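appendcols can line the two result sets up, as long as both searches cover the same time range and use the same span so the rows land in the same buckets. A sketch using the field names from the post:

index=xxx sourcetype=yyy
| timechart span=1h avg(stat1) by node
| appendcols
    [ search index=xxx sourcetype=yyy
      | timechart span=1h dc(node) as node_count ]

node_count then shows up as one more series on the same panel; if its scale differs a lot from the per-node averages, a chart overlay on a second Y axis keeps it readable.
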
The original query:

host="MEIPC" source="WinEventLog:Application" OR source="WinEventLog:Security" OR source="WinEventLog:System"
| chart count by source

A possible solution I could not get to work:

| top limit=10 class showperc=f countfield="source"
| reverse
| transpose header_field="Class" column_name="Class"
| search class="source"

So I searched all over for how to change the color of the bars for each of the 3 sources I gathered data from. I put the chart in the dashboard and noticed that it groups everything under one encompassing source, with no individual option for each source; this is labeled under the X axis. However, when I try to change the color of the bars, only changing the color of count (the Y axis) has any effect. This confuses me, because I would expect to be able to change the color options in the dashboard menus for each individual X-axis source, but instead it's the Y-axis count that controls the bar color, and there is no option to apply coloring per X-axis source. What also confuses me is that when I look at the statistics, there are 3 sources the data is gathered from. Please leave a comment if you have the time. Thank you so much, Splunk Community!

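Colors on a column chart are assigned per series (per column of the results), and `chart count by source` produces a single series named count, which is why only the count color does anything. Transposing so each source becomes its own column gives each source its own series, and therefore its own color picker. A sketch:

host="MEIPC" source="WinEventLog:Application" OR source="WinEventLog:Security" OR source="WinEventLog:System"
| chart count by source
``` turn the three source rows into three columns/series ```
| transpose header_field=source column_name=metric
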
I have a visualization that counts the total number of errors using a lookup. Instead of the actual number of events, I'd like to get a percentage of specifically errors. Image attached for reference.

| inputlookup fm4143_3d.csv
| stats count(ERROR_MESSAGE)
```| appendpipe [| stats count as message | eval message=if(message==0,"", " ")] | fields - message ```

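count(ERROR_MESSAGE) only counts the rows where that field is present, so dividing it by an overall count in the same stats call gives the error percentage directly. A sketch; if empty cells come through as empty strings rather than missing values, swap the error count for sum(eval(if(ERROR_MESSAGE!="",1,0))):

| inputlookup fm4143_3d.csv
| stats count as total, count(ERROR_MESSAGE) as errors
| eval error_pct = round((errors/total)*100, 2)
| fields error_pct
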
I have a field called eventtime in my logs, but the time is 19 characters long in epoch time (which goes down to nanoseconds). The field is in the middle of the events, but I want to use it as the timestamp. However, when I define the TIME_PREFIX through the UI, it won't recognize it. There is another field that also has epoch time, but only 10 characters; when I use that one, it works, it just doesn't give me the nanoseconds. So it's not a syntax issue. There are no periods in the timestamp. How can I fix this? Using the UI for testing is easier for getting feedback, but if I need to modify it in props.conf, that's fine.

Additional context: the data comes in in JSON format, but only uses single quotes. I fixed this by using SEDCMD in props.conf to swap the single quotes with double quotes. In the TIME_PREFIX box (again, in the UI), I used single quotes, as double quotes didn't work (which makes sense).

'eventtime': '1707613171105400540'
'itime_t': 1707613170'

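An untested props.conf sketch for the nanosecond case: the usual approach for a 19-digit epoch is to parse the first 10 digits as %s and the trailing 9 as %9N, with the lookahead widened to cover all 19 characters. TIME_PREFIX still has to match the original single quotes (as you found), since SEDCMD runs after timestamp extraction. The stanza name is a placeholder:

# props.conf - sketch only, sourcetype name is illustrative
[your_sourcetype]
SEDCMD-fixquotes = s/'/"/g
TIME_PREFIX = 'eventtime':\s*'
TIME_FORMAT = %s%9N
MAX_TIMESTAMP_LOOKAHEAD = 19
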
I am using Splunk Cloud. As admin, I created a new user, but the user has yet to receive an email notification with the necessary login details. What might be the issue?