All Posts


Hi Guys, I am very new to Splunk Cloud and how things work here. Our current setup is:

1. UF (Linux) -> Heavy Forwarder (on-prem) -> Indexer/Search Head (Splunk Cloud)
2. Created a new index quifapp on Splunk Cloud.
3. UF is already connected to the HF (just a dummy connection, and I verified that it is sending _internal logs to Splunk Cloud), as can be seen from the logs:

02-20-2024 11:22:11.394 +1100 INFO AutoLoadBalancedConnectionStrategy [566068 TcpOutEloop] - Found currently active indexer. Connected to idx=192.168.23.32:9997:0, reuse=1

4. The new request is to forward logs from /app/quif/quif.log to Splunk Cloud.
5. I have put the required config under /opt/splunkforwarder/etc/apps/quif/local, which has two files:

# cat inputs.conf
[monitor:///app/quif/quif.log*]
sourcetype = quif_requests
disabled = 0
index = quifapp

# cat props.conf
[quif_requests]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
EXTRACT-AgentId = ^\w+:\s+(?P<AgentId>\w+)

6. I restarted SplunkForwarder but can't see any logs coming in the Cloud.

Is there any additional config that's required at any level? How can I troubleshoot?
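For reference, one way to check whether the UF's tailing processor ever picked the file up is a search against _internal; a minimal sketch, where the host value is a placeholder for the UF's hostname:

index=_internal host=<uf_hostname> source=*splunkd.log* (component=TailingProcessor OR component=TailReader) "/app/quif/quif.log"

If nothing shows up there, running splunk btool inputs list --debug on the UF will confirm whether the monitor stanza is being loaded at all.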
Hi @bowesmana , Thank you!!!! This query worked:

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
| rename type: as Types
| timechart span=1d values(count) by Types
| addtotals
@ITWhisperer is right, but I've used a time as a fraction to show on the Y axis, something like

| rex field=End "\d+/\d+/\d+ (?<h>\d+):(?<m>\d+)\s+"
| eval t=round(h+(m/100),2)
| chart max(t) over Date by System

It's a bit of a kludge, as it will represent 10:50 as 10.50, so it will sit halfway between 10 and 11. You could represent it as a true fraction, i.e. divide by 60 rather than 100, but then the numbers are not so readable, i.e. 10:50 shows as 10.83.
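For reference, the true-fraction variant is the same search with /60 in the eval (10:50 then plots as 10.83, at its correct height between 10 and 11):

| rex field=End "\d+/\d+/\d+ (?<h>\d+):(?<m>\d+)\s+"
| eval t=round(h+(m/60),2)
| chart max(t) over Date by System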
I'm not sure why rex is properly matching the beginning of the value I am looking for (NameofTeam), but it also matches and includes everything after it. As I understand it, my search should stop matching when it reaches "}, after matching the team name. What am I doing wrong?

index=test
| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+)\"},"

Sample Data:
{"Key": "OtherKey", "ValueString": "OtherValue"},
{"Key": "Owner", "ValueString": "NameofTeam"},
{"Key": "OtherKey", "ValueString": "OtherValue"},

Expected Output:
NameofTeam

Actual Output:
NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},
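For what it's worth, .+ is greedy, so it runs through to the last "}, in the event rather than stopping at the first one. A minimal sketch of a capture that stops at the first closing quote, assuming team names never contain a quote:

index=test
| rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>[^\"]+)\"},"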
Yes, I was looking for that where statement.  My mistake, I tried using |search which didn't work.  Thanks!
Also note that you don't need bin + timechart as you can simply do | timechart span=1d...
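For example, with a hypothetical count-by-host search (the original was elided), the bin step is redundant because timechart takes span directly:

| timechart span=1d count by host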
Not sure if this is what you're trying to do, but you can always 'remove' the dates you don't want with a where clause.

If the compliant7sma field does not exist for the first 6 days, then just add

| where isnotnull(compliant7sma)

at the end, or to do it as a time-based filter:

| where _time>=relative_time(now(), "-30d@d")
Your values for id=... and base=... should be in quotes; I'm not sure how that is not giving you an error. Also, you do not need earliest or latest for the post-processing search, and you cannot override the base search.

The simplest way to diagnose why a post-process search is not working is to open the panel's search in a new window, see what the search is producing, and then remove the SPL line by line from the end of the search until you can see what is going wrong.

Those searches _look_ ok, so try the above to figure out what's wrong.
Something as simple as this should work:

search ...
| stats range(_time) as duration by userId, traceId
| stats sum(duration) as duration by userId
| eval connected=tostring(duration, "duration")

Based on your data example, where traceId indicates the connection boundaries, the stats by traceId and userId gets the duration for each session, and then the second stats sums the durations.

If there is no disconnection event, then range(_time) will be 0, so any active connections will not be counted. If you want to calculate that based on the time of the search, you could do this:

| stats count min(_time) as start_time range(_time) as duration by userId, traceId
| addinfo
| eval duration=if(duration=0 AND count=1, info_max_time - start_time, duration)
| stats sum(duration) as duration by userId
| eval connected=tostring(duration, "duration")

which would treat the ending time of the search range as the ending time of an open connection.

And finally, to handle all cases where you may have no start or no end event, try this one:

search...
| eval type=if(match(message,"created"),1,2)
| stats count values(type) as types min(_time) as first_event_time range(_time) as duration by userId, traceId
| addinfo
``` Handle created but no disconnect ```
| eval duration=if(duration=0 AND count=1 AND mvcount(types)=1 AND types=1, info_max_time - first_event_time, duration)
``` Handle disconnect but no created ```
| eval duration=if(duration=0 AND count=1 AND mvcount(types)=1 AND types=2, first_event_time - info_min_time, duration)
| stats sum(duration) as duration by userId
| eval connected=tostring(duration, "duration")
Hi @bowesmana , there was a typo error; now I have provided the complete query, please check and provide your inputs.

Dashboard xml without base search:

<form>
  <label>Dashboard</label>
  <row>
    <panel>
      <title> Metric by hosts</title>
      <table>
        <search>
          <query>
            |mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host=host1 OR host=host2 span=1d
            |table Memory_Used
            |foreach Memory* [|eval <<FIELD>>=round('<<FIELD>>',2)]
          </query>
          <earliest>-2d@d</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>

Dashboard xml with base search (NOT GETTING OUTPUT):

<form>
  <label>Dashboard</label>
  <search id=mstats>
    <query>
      |mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host=host* by host span=1d
      |foreach Memory* [|eval <<FIELD>>=round('<<FIELD>>',2)]
    </query>
    <earliest>-2d@d</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title> Metric by hosts</title>
      <table>
        <search base=mstats>
          <query>
            |search host=host1 OR host=host2
            |table Memory_Used
          </query>
          <earliest>-2d@d</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
What is not working? Your first query cannot work, as you do a

... |table Memory_Used |foreach avg* [|eval <<FIELD>>=round('<<FIELD>>',2)]

and there are no fields starting with avg*.

This should work for the base search:

| mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN(host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d
| eval Memory_Used=round(Memory_Used,2)

but I am puzzled why you need a post-processing search, as there is only one field for each host, Memory_Used. What are you trying to show in the different panels?
@vishwa I suspect your fields are not actually coming out as Model1 etc, i.e. they may have some strange characters in there.

You can try to figure out what the Types field values are before you do the timechart by doing something like

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
|rename type: as Types
| head 10
| eval types=":".Types.":", len=len(Types)
| table Types types len

to see if there are any odd characters or the len does not come out as 8.

The simplest thing to do, though, is to not care about the names and just add addtotals, i.e.

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
| rename type: as Types
| timechart span=1d values(count) by Types
| addtotals

which will add up all the numeric fields and create a new field called Total.
Hello all, I hope this is the right forum. I am having some trouble with the Barracuda Email Security Gateway Add-on and field extraction.

We have a Splunk Cloud subscription and I am using an Ubuntu server with rsyslog and a universal forwarder to send syslog data to our Splunk Cloud instance. I have the Barracuda Email Security Gateway Add-on installed in our Splunk Cloud. I have the data from our Barracuda Email Gateway system going into a file called /var/log/syslog_barracuda.log. I have my inputs.conf file configured as follows:

[monitor:///var/log/syslog_barracuda.log]
disabled = 0
sourcetype = barracuda

In our Splunk Cloud, I see the events, and they have the "barracuda" sourcetype as expected. The problem is, no field extraction is applied to these events. Is there something I am missing? The Add-on only shows to add the lines to the inputs.conf file. Any help would be appreciated; I am new to Splunk and trying to wrap my head around everything.
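In case it helps anyone diagnosing the same thing, a quick sanity check that nothing upstream rewrote the sourcetype the add-on's extractions are keyed to (the index name is a placeholder):

index=<your_index> source="/var/log/syslog_barracuda.log"
| stats count by sourcetype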
|mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host=host1 OR host=host2 span=1d
|table Memory_Used
|foreach avg* [|eval <<FIELD>>=round('<<FIELD>>',2)]

We have a dashboard where we are calculating memory for different hosts; the above query is one example. We don't want to repeat this query for all panels; instead, I want to create a base search with all hosts, like the query below, but it's not working. Can anyone help on this?

Base Search:
|mstats avg(os.mem.utilized) as Memory_Used where index=metricsidx host IN(host1, host2, host3, host4, host5, host6, host7, host8, host9, host10) by host span=1d
|foreach avg* [|eval <<FIELD>>=round('<<FIELD>>',2)]

Sub search:
|table Memory_Used
Query:

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
|rename type: as Types
|timechart span=1d values(count) by Types
|eval Total=Model1 +Model2+ Model3+ Model4
|fillnull value=0

OUTPUT:
_time        Model1  Model2  Model3  Model4  Total
2021-04-12   2       0       1       4       0
2021-04-13   0       4       0       0       0
2021-04-14   8       2       10      4       24
2021-04-15   3       0       2       8       0
2021-04-16   1       4       2       2       9

EXPECTED OUTPUT:
_time        Model1  Model2  Model3  Model4  Total
2021-04-12   2       0       1       4       7
2021-04-13   0       4       0       0       4
2021-04-14   8       2       10      4       24
2021-04-15   3       0       2       8       13
2021-04-16   1       4       2       2       9
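A likely cause, sketched below: eval arithmetic returns null when any operand is null, and fillnull here runs only after Total has already been computed. Filling before the eval (or using addtotals, as suggested elsewhere in the thread) avoids the empty totals:

|tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
|rename type: as Types
|timechart span=1d values(count) by Types
|fillnull value=0
|eval Total=Model1+Model2+Model3+Model4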
I have a requirement to calculate the total time a user has been connected to a system. For that I have logs as below, which state when the user connected and disconnected. How can we calculate the total time connected by userId?

{ "timestamp": "2024-02-19T20:15:01,544Z", "message": "Connection created.", "sessionId": "383c4345", "operation": "Connect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T21:19:01,544Z", "message": "Connection disconnected.", "sessionId": "383c4345", "operation": "Disconnect", "spanId": "578045e3cfdbb598", "traceId": "578045e3cfdbb598", "userId": "test12" }
{ "timestamp": "2024-02-19T22:24:01,544Z", "message": "Connection created.", "sessionId": "383cxfe45", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T23:17:01,544Z", "message": "Connection disconnected.", "sessionId": "383cxfe45", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test12" }
{ "timestamp": "2024-02-19T19:45:01,544Z", "message": "Connection created.", "sessionId": "7490647a", "operation": "Connect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T20:19:01,544Z", "message": "Connection disconnected.", "sessionId": "7490647a", "operation": "Disconnect", "spanId": "c6cc6afa1a770fc9", "traceId": "c6cc6afa1a770fc9", "userId": "test_234" }
{ "timestamp": "2024-02-19T21:34:01,544Z", "message": "Connection created.", "sessionId": "f9cbd2f", "operation": "Connect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }
{ "timestamp": "2024-02-19T23:20:01,544Z", "message": "Connection disconnected.", "sessionId": "f9cbd2f", "operation": "Disconnect", "spanId": "fdcaf3d14e630be3", "traceId": "fdcaf3d14e630be3", "userId": "test_234" }

I have given examples for two userIds, test12 and test_234, each of which has connected/disconnected twice. I am expecting results as below:

User       Total Time Connected
test12     1 hour 59 min
test_234   2 hours 20 min
Edit: I realized I may have given the wrong impression by posting my example JSON object. The intention was to extract a value from any of the 3 fields into the single column.

1. Yes, field3 is "Resource Name".
2. Yes.
3. I used "Attribute Name" and "ID" as placeholder values for the real data (just sanitizing the values/fields). They were just meant to give an example. In hindsight I think I should have stuck with the generic names "field1-3".

Output format:

event.AccountId   event.CloudPlatform   event.CloudService   Additional Details
xxxxxxxxxx        CloudProvider         Service              name-resource-121sg6fe
                                                             ID-12132423fgrg
                                                             other-named-attribute
Now we are deep into the weeds of actual data. The number of rows depends only on how many unique claimNumber values the regex "(?i) claim # *(?<claimNumber>\S+)" extracts from both source filters. A meaningful test would be

(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*")
| rex "(?i) claim # *(?<claimNumber>\S+)"
| stats dc(clmNumber) as clmCount dc(claimNumber) as claimCount

Do they give 23? 75? Does one give 75 and the other 23? (According to your description, claimCount should be 23.) If the two counts are equal, there is nothing to change.

If you get different counts for clmNumber and claimNumber, you can do another test:

(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*")
| rex "(?i) claim # *(?<claimNumber>\S+)"
| table _time clmNumber claimNumber _raw

Then you need to refine the regex. Post sample data for which claimNumber is not extracted if you need help with the regex.
I am trying to create a db input keeping in mind the data that we have in the db. It has fields like PKEY, STARTTIME, ENDTIME etc. If I use PKEY or STARTTIME as the rising column, I am bound to miss some rows, and neither PKEY nor STARTTIME is unique on its own. So I am trying to use CONCAT(PKEY, STARTTIME):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME, CONCAT(PKEY, STARTTIME) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, STARTTIME) > ?
ORDER BY CONCAT(PKEY, STARTTIME) ASC

I am using a rising input and the checkpoint should have been "combination", but I am not getting any results in Rising Column, and I am getting the error:

java.sql.SQLException: Missing IN or OUT parameter at index:: 1

What am I doing wrong here? Also, sometimes the normal query also gives this error, but after refreshing and selecting the connection once again I get the required data.
Hi @ITWhisperer , Thank you, it worked! But when there are no counts, it's showing empty values in the table. I used fillnull value=0, but it's not working. How do I fix this?