Sure, I will give it a try. And what does (?ms) do?
Hi @blbr123, this seems to be a multiline log; try adding (?ms) at the beginning of the regex. Then test your regex in Splunk, not outside Splunk. Ciao. Giuseppe
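For context on the question above: (?ms) is a pair of inline regex flags. m (multiline) makes ^ and $ match at each line boundary, and s (dotall) lets . match newline characters, so a pattern can span a multiline event. A quick sketch in Python, whose re module uses the same inline-flag syntax (the sample log text is made up):

```python
import re

log = "first line\nsecond line"

# Without the "s" flag, "." stops at the newline, so the pattern
# cannot span both lines of the event.
print(re.search(r"first.+second", log))        # None

# (?ms): "m" lets ^/$ match at each line boundary, "s" lets "."
# cross newlines, so the pattern now spans the whole event.
print(re.search(r"(?ms)^first.+second", log))  # a match object
```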
Hi @majilan1, you should create a lookup containing the perimeter to monitor (called e.g. perimeter.csv), containing at least the host field and possibly other information. Then you could run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| eval status=if(total=0,"Missed","Present")
| table host status

Ciao. Giuseppe
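The idea behind that search, sketched outside SPL (host names and counts below are made up for illustration): compare the hosts that actually reported events against the expected perimeter, and mark any expected host with zero events as missed.

```python
# Hypothetical data standing in for the tstats result and perimeter.csv.
reporting_hosts = {"web01": 120, "web02": 87}   # host -> event count
perimeter = ["web01", "web02", "db01"]          # expected hosts

# A host is "Missed" when it appears in the perimeter but sent no events.
status = {h: "Present" if reporting_hosts.get(h, 0) > 0 else "Missed"
          for h in perimeter}
print(status)  # db01 is "Missed" because it sent no events
```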
Yes, I have checked the regex in Splunk and it looks good. There are no other HFs before.
Hi @blbr123, did you check the regex in Splunk? If you could share a sample of your logs, I can help you with this. Are there other HFs (one or more) before the one where you located props and transforms? The transformation must be applied in the first full Splunk instance the data passes through. Ciao. Giuseppe
I am using an ingest action to filter log messages before they are indexed in Splunk. I want to include only the messages that match the keywords :ERROR: and :FATAL:; all other messages should not be indexed. However, in Splunk the ingest action filter can only exclude messages, not include them.
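One possible workaround (an assumption on my part, relying on the filter accepting standard PCRE-style lookaheads): since the filter can only exclude, exclude everything that does not contain the keywords, using a negative lookahead. Sketched in Python, which supports the same lookahead syntax (the sample events are invented):

```python
import re

# Drop any event that does NOT contain :ERROR: or :FATAL: anywhere.
drop_pattern = re.compile(r"^(?!.*(:ERROR:|:FATAL:)).*$", re.DOTALL)

events = [
    "2024-05-03 app started :INFO: ok",
    "2024-05-03 disk failure :ERROR: cannot write",
    "2024-05-03 shutting down :FATAL: unrecoverable",
]
# Events matched by drop_pattern are the ones to discard; the rest survive.
kept = [e for e in events if not drop_pattern.match(e)]
print(kept)  # only the :ERROR: and :FATAL: events remain
```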
Hi @Siddharthnegi, as I said in my answers to your previous question, if you install the Splunk Dashboard Examples app ( https://splunkbase.splunk.com/app/1603 ), you'll find the example "Null Search Swapper", which describes how to replace a panel with a message when there are no results; that's exactly the feature you need. The example includes the code to use in the dashboard, which you only need to customize for your searches and panels. What's the issue? Ciao. Giuseppe
Use this type of technique: set a token if there are results, so the panel showing the table will display (depends=) and the panel showing the message will not display (rejects=).

<table depends="$has_results$">
  <search>
    <query>Your search</query>
    <done>
      <eval token="has_results">if($job.resultCount$&gt;0, 1, null())</eval>
    </done>
  </search>
</table>
<html rejects="$has_results$">
  <h1>There are no results</h1>
</html>
I want to show a custom message when the panel shows count=0, which means the search is not returning any results now but might in the future.
Hi Team, we are also configuring Microsoft Teams using the HTTP request template, and the controller has a reverse proxy; it is throwing a "connection refused" error. We have tried the proxy connection on the controller host and it connects. Kindly suggest. Regards, Pallavi Lohar
Thanks, I'll review the maxQueueSize. If the warning count was higher, such as 20 in your example, what would be the best way to determine a good value (in bytes) for maxSendQSize to avoid the slow indexer scenario?
Is this a ChatGPT answer? Firstly, the OP does not mention having the Splunk Enterprise Security app; the A&I framework is part of ES, and your example search seems related to a query that would populate an identity registry in ES rather than anything to do with the OP's post. Secondly, the search NOT [| inputlookup...] technique should never be recommended without a big warning about subsearches, which can perform terribly; I recently fixed a search using a NOT subsearch that was taking 18 minutes to evaluate the NOT criteria and reduced it to 9 seconds. Certainly, a lookup of users to validate against can be a valid solution, but this depends on whether the OP wants to find a new user's first-ever login or to check whether the user has not logged in for 30 days, which is not clear.
You can fetch the results of a lookup using this search:

| inputlookup your_lookup.csv

Replacing the lookup data would be:

(your search here) | outputlookup your_lookup.csv

And then you can add (append) rows using:

(your search here) | outputlookup append=true your_lookup.csv
You need historic data of users to compare. You would need to configure Assets & Identities or save users to a simple lookup. You can store results daily, weekly, or monthly using this search:

index=your_users_index
``` Add or configure the necessary fields ```
| eval bunit="your_bunit", startDate=strftime(now(),"%Y-%m-%d %H:%M:%S")
| stats count by email, identity, nick, UserId, "first", "last", JobTitle, phone, bunit, work_country, work_city, startDate
| table email, identity, nick, UserId, "first", "last", JobTitle, phone, bunit, work_country, work_city, startDate
| search NOT [| inputlookup users.csv | fields email ]
| outputlookup append=true users.csv

And later you can sort users by startDate using this search:

| inputlookup users.csv | sort - startDate

Or get users first seen in the last 20 days:

| inputlookup users.csv
| eval epoch=strptime(startDate, "%Y-%m-%d %H:%M:%S")
| where epoch>relative_time(now(), "-20d")
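The final startDate comparison can be pictured like this (a sketch with invented rows; the field names follow the search above, and the fixed "now" is only for illustration):

```python
from datetime import datetime, timedelta

# Rows as they might sit in users.csv after several appends.
users = [
    {"identity": "alice", "startDate": "2024-04-01 09:00:00"},
    {"identity": "bob",   "startDate": "2024-05-01 09:00:00"},
]
now = datetime(2024, 5, 10)
cutoff = now - timedelta(days=20)  # the SPL's relative_time(now(), "-20d")

# Keep users whose first-seen timestamp falls inside the last 20 days.
recent = [u for u in users
          if datetime.strptime(u["startDate"], "%Y-%m-%d %H:%M:%S") > cutoff]
print([u["identity"] for u in recent])  # ['bob']
```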
Hi All, my props and transforms are not working. I kept the props and transforms on the Heavy Forwarder; can anyone please assist? I want to drop the lines below from being ingested into Splunk, but it's not working.

#Date: 2024-05-03 00:00:01
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken https

props.conf:
[mysourcetype]
TRANSFORMS-drop_header = drop_header

transforms.conf:
[drop_header]
REGEX = ^#Date.+\n#Fields.+
DEST_KEY = queue
FORMAT = nullQueue
Let this be a lesson for all who ask questions: illustrate/explain your data (anonymize as needed) and your desired result, and explain the logic between data and desired result in plain language without SPL. SPL should come after all the explanations, before illustrating the actual result from that SPL; then explain why that result differs from the desired one if it is not painfully obvious. Secondly, posting SPL without formatting discourages volunteers. Third, SPL (and raw data) is best illustrated in a code box. Let me help so other volunteers do not have to do the hard work.

index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| rename _time as Time1
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
| stats count by Time1 transactionId Request_time
| appendcols
    [| search index=hum_stg_app earliest=-30d fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"
    | rename "msg.TransactionId" as transactionId_request
    | sort 1000000 _time
    | dedup transactionId_request _time
    | eval Time=case(like(_raw,"%fcr-np-sms-gateway%"),_time)
    | eval lenth=len(transactionId_request)
    | search lenth=40
    | dedup transactionId_request
    | stats count by transactionId_request Time ]
| eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S")
| eval Time_dif=Time-Time1
| eval Time_diff=(Time_dif)/3600
| fields transactionId transactionId_request Request_time Transaction_Completed_time count Time_diff Time Time1

I took the pain to reverse engineer your intentions.
One thing I cannot understand is why you expect appendcols not to misalign transactionId between request and response. (I am quite convinced that Time_diff is only meaningful when transactionId and transactionId_request match.) Additionally, a subsearch over the same dataset should be used only as a last resort; this type of transaction-based calculation does not warrant such use. Let me try mind-reading a bit and state the goal of your search: calculate the difference between the time the request is sent and the time the response indicates completion, for the same transactionId. To do this, simply search both the request event and the completion event in one search, then do a stats to find the time range, the earliest (request) time, and the latest (completion) time. Like this:

index=hum_stg_app (("msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF) OR (fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"))
| eval transactionId = coalesce('msg.OM_Body.header.transactionId', 'msg.TransactionId')
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| stats range(_time) as Time_diff min(_time) as Request_time max(_time) as Transaction_Completed_time by transactionId
| eval Time_diff=Time_diff/3600

Two notes: I see you inserted earliest=-30d in the subsearch (for the completion message). I do not know how that value relates to earliest in the main search (for the request message), so the above doesn't adjust for it. If anything, I assume the request message has to be earlier, so the search window would necessarily be larger (or equal).
Between Request_time (min) and Transaction_Completed_time (max), only one is necessary because Time_diff is already calculated by range. I put both there to validate that the range is not negative. (The range function always returns a non-negative number, so before taking one of those times out, do some testing.)
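The stats range/min/max step above boils down to grouping both event types by transaction id and taking the earliest and latest timestamps. A sketch with hypothetical epoch times:

```python
from collections import defaultdict

# (transactionId, epoch_time) pairs covering request and completion events.
events = [("t1", 100.0), ("t1", 160.0), ("t2", 200.0), ("t2", 230.0)]

times = defaultdict(list)
for tid, t in events:
    times[tid].append(t)

# range(_time) = max - min per transaction, then convert seconds -> hours,
# mirroring | eval Time_diff=Time_diff/3600 in the SPL.
time_diff_hours = {tid: (max(ts) - min(ts)) / 3600 for tid, ts in times.items()}
print(time_diff_hours)
```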
Try this:

index="cdr" "Tipo_Trafico"="*" "Codigo_error"="*"
| eval Error_{Codigo_error}=if(Codigo_error="69" OR Codigo_error="10001", 1, 0)
| stats count(eval(Tipo_Trafico="MT")) AS Total_MT sum(Error_*) as Error_*
| foreach Error_*
    [ eval Error_<<MATCHSTR>>_P=('<<FIELD>>'*100/Total_MT), ThresholdExceeded=if(Error_<<MATCHSTR>>_P > 10, 1, coalesce(ThresholdExceeded, 0)) ]
| where ThresholdExceeded>0
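The foreach logic is doing this per error code (sketched with made-up counts): compute each error's share of MT traffic and flag when any share exceeds 10%.

```python
total_mt = 200
errors = {"69": 30, "10001": 5}  # hypothetical counts per Codigo_error

# Percentage of MT traffic per error code, like Error_<<MATCHSTR>>_P.
percentages = {code: count * 100 / total_mt for code, count in errors.items()}
# Flag set if any error code exceeds the 10% threshold.
threshold_exceeded = any(p > 10 for p in percentages.values())
print(percentages)         # {'69': 15.0, '10001': 2.5}
print(threshold_exceeded)  # True, because error 69 is at 15%
```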
I can't get it to show me when the percentage of errors 69 and 10001 is greater than 10; the following search doesn't work. Can you help me?

index="cdr"
| search "Tipo_Trafico"="*" "Codigo_error"="*"
| stats count(eval(Tipo_Trafico="MT")) AS Total_MT, count(eval(Codigo_error="69")) AS Error_69
| eval P_Error_69=((Error_69*100/Total_MT))
| stats count(eval(Tipo_Trafico="MT")) AS Total_MT, count(eval(Codigo_error="10001")) AS Error_10001
| eval P_Error_10001=((Error_10001*100/Total_MT))
| stats count by P_Error_69, P_Error_10001
| where count>10
If the warning count is 1, then it's not a big issue. What it indicates is that, out of the maxQueueSize bytes of tcpout queue, one connection has occupied a large share, so the TcpOutputProcessor will pause. maxQueueSize is per pipeline and is shared by all target connections in that pipeline. You may want to increase maxQueueSize (e.g. double the size).
This setting definitely looks useful for slow receivers, but how would I determine when to use it, and an appropriate value? For example, you mentioned:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20

I note that you have Warningcount=20; a quick check in my environment shows Warningcount=1. If I'm just seeing the occasional warning, I'm assuming tweaking this setting would be of minimal benefit? Furthermore, how would I appropriately set the bytes value? I'm assuming it's per-pipeline, and the variables involved might relate to volume per second per pipeline; any other variables? Any example of how this would be tuned, and when? Thanks