All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


There is nothing to forgive. Most of us are not native English speakers, so sometimes putting your thoughts into words can be tricky. So you have your data ingested "properly" and just want to render your events in the output of your search as JSON structures? That's actually quite simple: <yoursearch> | tojson
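For illustration, here is a rough Python sketch of what `tojson` does conceptually: each result's fields become one JSON object (the field names and values here are hypothetical, not from any real search):

```python
import json

# One search result's fields, as a dict (hypothetical sample values)
event = {"host": "web01", "status": 200, "action": "purchase"}

# tojson rewrites each result as a JSON object in _raw;
# json.dumps produces an equivalent structure for illustration
raw = json.dumps(event, sort_keys=True)
print(raw)
```

The real command operates per result row inside the search pipeline, so no extra tooling is needed.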
Hello @ITWhisperer, thanks for your response. I tried adding the search you provided but failed to get the desired value. Can you please elaborate further on how to use the solution you provided?
Shouldn't your eval be something like this: | eval Grade=case(GPA=1,"D", GPA>1 AND GPA<=1.3,"D+")
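A Python sketch of what that case() expression evaluates to (case() returns the value paired with the first true condition, or null if none match):

```python
def grade(gpa):
    # Mirrors: case(GPA=1,"D", GPA>1 AND GPA<=1.3,"D+")
    # The first matching condition wins; no match yields null (None here)
    if gpa == 1:
        return "D"
    elif gpa > 1 and gpa <= 1.3:
        return "D+"
    return None

print(grade(1), grade(1.2), grade(2))  # D D+ None
```

Note that a value like 2 falls through every branch and comes back null, so in SPL you would typically add a catch-all condition such as `true(), "N/A"` at the end.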
Hi @PickleRick, forgive me, I fear I explained myself badly. The Windows logs come directly from Domain Controllers. They are ingested using a UF and pass through a HF, so the final flow is: DCs with UF installed -> HF -> Splunk Cloud environment. In addition, TA_windows is installed on both the HF and Splunk Cloud. So, we don't want to ingest data from a third-party forwarder; we want to know if, with this environment and the above add-on installed, we are able to see logs in JSON format when we perform searches on the SH, or whether we can see only the Legacy and XML ones because, with this environment and this add-on, no other formats are supported.
| makeresults
| eval _raw="LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023 @ID............ 202309260081340532.2 @ID............ 202309260081340532.21 PROTOCOL.ID.... 202309260081340532.21 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:32:934 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT @ID............ 202309260081340523.16 @ID............ 202309260081340523.16 PROTOCOL.ID.... 202309260081340523.16 PROCESS.DATE... 20230926 TIME.MSECS..... 11:15:23:649 K.USER......... INPUTTER APPLICATION.... AC.INWARD.ENTRY LEVEL.FUNCTION. 1 ID............. REMARK......... ENQUIRY - AC.INTERFACE.REPORT"
``` The lines above set up sample data in line with your example ```
| rex max_match=0 "(?ms)(?<event>^\@ID.*?REMARK.*?$)"
| mvexpand event
| rex max_match=0 field=event "(?m)(?<namevalue>.+\.+\s.*$)"
| streamstats count as row
| mvexpand namevalue
| rex field=namevalue "(?<name>[^\s]+(?<!\.))\.*?\s(?<value>.*$)"
| eval {name}=value
| fields - name value namevalue event
| stats values(*) as * by row
| fields - row
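To make the per-line extraction easier to follow, here is the same name/value logic sketched in Python (a standalone illustration of the rex pattern, not part of the SPL): the field name is everything up to the trailing dot padding, and the value is whatever follows the separating whitespace.

```python
import re

# One event block from the sample data above
event = """@ID............ 202309260081340532.21
PROTOCOL.ID.... 202309260081340532.21
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:32:934
K.USER......... INPUTTER
APPLICATION.... AC.INWARD.ENTRY
LEVEL.FUNCTION. 1
REMARK......... ENQUIRY - AC.INTERFACE.REPORT"""

fields = {}
for line in event.splitlines():
    # Name must not end in a dot, so the dot padding is excluded;
    # value is the remainder after the whitespace separator
    m = re.match(r"(?P<name>\S+(?<!\.))\.*\s+(?P<value>.*)$", line)
    if m:
        fields[m.group("name")] = m.group("value")

print(fields["K.USER"])  # INPUTTER
```

The `(?<!\.)` negative lookbehind is what stops the name just before the dot padding, which is the same trick the SPL rex uses.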
[stream:ip]
TRUNCATE = 0
did not help. Any other suggestions?
I suppose, judging by the section of Answers you posted it in, you want to ingest the JSON-formatted Windows events supplied by a third-party "forwarder" (whatever it is - NXLog, Kiwi, winlogbeat...). You can ingest the events in any way you want, but unless they are in one of the two formats supported by TA_windows, you're on your own with parsing and such. See, for example, the https://community.splunk.com/t5/Getting-Data-In/Connect-winlogbeat-log-format-to-Splunk-TA-Windows/m-p/669783#M112304 thread for a similar question.
Hi, I want to find the grade based on my case condition, but my query is not working as expected: | eval Grade=case(Cumulative=1,"D", Cumulative>1 AND Cumulative<=1.3,"D+")] Example: My Grade should be based on avg(GPA). If avg(GPA) is 1, the Grade at the bottom (Avg Grade) should be D; if it is between 1 and 1.3, then it should be D+.
Hello All, I am testing the data inputs for the Splunk Add-on for ServiceNow, and there is a requirement to include only certain fields in the data. I tried to set the filtering using the "Included Parameters" option in the input and added the desired comma-separated fields. However, I am not able to see those fields; in the output I see only the two default id and time fields. I have included the following fields: dv_active,dv_assignment_group,dv_assigned_to,dv_number,dv_u_resolution_category Is there anything that I am doing wrong? Regards, Himani.
Hello community, below is my sample log file. I want to extract each individual piece of event (starting from @ID to REMARK) from the log file. I tried to achieve this by using the following regex: (^@ID[\s\S]*?REMARK.*$) This regex is taking the whole log file as a single event. I also tried to alter props.conf using the same regex:

[t24]
SHOULD_LINEMERGE=False
LINE_BREAKER=(^@ID[\s\S]*?REMARK.*$)
NO_BINARY_CHECK=true
disabled=false
INDEXED_EXTRACTIONS = csv

Sample log:

LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023
@ID............ 202309260081340532.21
@ID............ 202309260081340532.21
PROTOCOL.ID.... 202309260081340532.21
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:32:934
K.USER......... INPUTTER
APPLICATION.... AC.INWARD.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... ENQUIRY - AC.INTERFACE.REPORT
@ID............ 202309260081340523.16
@ID............ 202309260081340523.16
PROTOCOL.ID.... 202309260081340523.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... AC.INWARD.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... ENQUIRY - AC.INTERFACE.REPORT

I want my data to be shown in table form on Splunk, with one row per event.
Use autoregress, something like this: | makeresults | eval a=mvrange(1,102) | mvexpand a | autoregress a p=1-100
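For anyone curious what autoregress does conceptually, here is a rough Python equivalent (field names are assumed to match the example above): `autoregress a p=1-N` adds fields a_p1..a_pN to each row, holding the value of a from 1..N rows earlier.

```python
def autoregress(values, n_lags):
    # Build lag fields a_p1..a_pN per row, as "autoregress a p=1-N" would
    rows = []
    for i, v in enumerate(values):
        row = {"a": v}
        for p in range(1, n_lags + 1):
            # a_pN is the value from N rows earlier; early rows have no lag
            row[f"a_p{p}"] = values[i - p] if i - p >= 0 else None
        rows.append(row)
    return rows

rows = autoregress(list(range(1, 8)), 2)
print(rows[3])  # {'a': 4, 'a_p1': 3, 'a_p2': 2}
```

A single autoregress call replaces the whole chain of `streamstats current=false last(...)` commands, which is why it scales to 100 lags without becoming convoluted.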
While @bowesmana's solution should work, be aware that working with structured data using just regexes can lead to unforeseen problems.
Probably many people have had various issues. That's the thing with computers - if you do something badly, you have problems. But seriously - in order to set up monitoring properly you need to know what to monitor, think about how you want to monitor it, and verify that your monitoring solution supports that. It's a completely different thing to just monitor server performance parameters via SNMP than to do functional checks with Selenium to verify that the whole setup is working properly.
You mean something like | streamstats <optionally current=f> window=10 list(myfield) <optionally BY another_field>
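A rough Python sketch of what that streamstats does: it keeps a sliding window of the last 10 values and emits the accumulated list at each row; with current=f the current row is excluded from its own window.

```python
from collections import deque

def windowed_list(values, window=10, include_current=True):
    # Mimics: streamstats [current=f] window=<window> list(myfield)
    buf = deque(maxlen=window)  # deque drops the oldest value automatically
    out = []
    for v in values:
        if include_current:
            buf.append(v)
            out.append(list(buf))
        else:
            out.append(list(buf))  # snapshot taken before the current row
            buf.append(v)
    return out

print(windowed_list([1, 2, 3, 4], window=3))
# [[1], [1, 2], [1, 2, 3], [2, 3, 4]]
```

The `BY another_field` clause would simply mean keeping one such window per group value.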
@inventsekar, they recommended upgrading, or updating the web.conf file, in the on-prem environment. How can we do this, given that it's not Cloud but an Enterprise deployment?
I have this query, where I want to build a dataset from a variable and its 4 previous values. I can solve it like so:

| makeresults
| eval id=split("a,b,c,d,e,f,g",",")
| eval a=split("1,2,3,4,5,6,7",",")
| eval temp=mvzip(id,a,"|")
| mvexpand temp
| rex field=temp "(?P<id>[^|]+)\|(?P<a>[^|]+)"
| fields - temp
| streamstats current=false last(a) AS a_lag1
| streamstats current=false last(a_lag1) AS a_lag2
| streamstats current=false last(a_lag2) AS a_lag3
| streamstats current=false last(a_lag3) AS a_lag4
| where isnotnull(a_lag4)
| table id a*

However, if I wanted to extend this to, say, 100 previous values, this code would become convoluted and slow. I imagine there must be a better way to accomplish this goal, but my research has not produced any alternative. Any ideas are appreciated.
Hi, our firewalls generate around 1000 High and Critical alerts daily. I would like to create use cases related to these notifications, but I am not sure of the best way to handle that volume. Could somebody advise on the best way to implement this, please?
Hi there, what are the best practices to migrate from Azure Sentinel to Splunk? We want to migrate sources, historical data, and use cases.
Hi Splunkers, I have a request from my customer. We have, like in many prod environments, Windows logs. We know that we can see events on the Splunk Console, with the Splunk Add-on for Microsoft Windows, in 2 ways: Legacy format (like the original ones in AD) or XML. Is it possible to see them in JSON format? If yes, can we achieve this directly with the above add-on, or do we need other tools?
Without any concrete data it's just fortune telling. Check processes, check I/O saturation, check memory usage. Verify that it's even Splunk that's causing the CPU hogging. Restarting processes blindly will probably not help much without addressing the underlying cause. Has anything been changed recently? Upgraded?