All Posts

Hi @KhalidAlharthi , the indexing queue is full, probably because you don't have enough disk space or there is too much data for the resources the Indexers have. Ciao. Giuseppe
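If you want to confirm that, a rough check of queue fill from metrics.log might look like this (a minimal sketch, assuming the _internal index is searchable from your search head):

    index=_internal source=*metrics.log* group=queue name=indexqueue
    | eval pct_full=round(current_size_kb/max_size_kb*100,2)
    | timechart span=10m max(pct_full) AS indexqueue_pct_full

Values sitting near 100% for long stretches usually point at slow disk or undersized Indexers.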
Hi @KhalidAlharthi , as I said, probably it was a temporary connectivity issue (in my project it was related to a Disaster Recovery test) that was quickly solved, but Indexers require some time to realign data and sometimes it's better to perform a rolling restart. Ciao. Giuseppe
Hi @Mark_Heimer , check how you created the drilldown filter, because those are HTML/URL-encoded characters. Ciao. Giuseppe
Hello Terence, My concern is not that "ALL" BTs are missing; I am satisfied with what is showing in the BT list. I am asking why it is creating "All Other Traffic". My limit (200) is not reached yet, so it should not create "All Other Traffic"; per the AppDynamics documentation, "All Other Traffic" should only be created once the BT limit is full. So why is it being created in my case? Kindly let me know in brief regarding my concern. Thanks, Satishkumar
There is another way to achieve similar results. Instead of the metadata command (which is great in its own right), you can use the tstats command which might work a bit slower than metadata but can do more complicated stuff with indexed fields. | tstats values(source) AS source WHERE index=* source !='*log.2024-*' | mvexpand source | <the rest of your evals>  
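For comparison, the metadata variant of the same idea might look something like this (a sketch only; adjust the index and the pattern to your data):

    | metadata type=sources index=*
    | search NOT source="*log.2024-*"
    | fields source
    | <the rest of your evals>

metadata already returns one row per source, so no mvexpand is needed there.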
What is a "DNS_v2" datamodel? It's not one of the CIM-defined ones. Your original search uses |datamodel Network_Resolution_DNS_v2 search whereas your tstats use | tstats summariesonly=true value... See more...
What is a "DNS_v2" datamodel? It's not one of the CIM-defined ones. Your original search uses |datamodel Network_Resolution_DNS_v2 search whereas your tstats use | tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" [...] Few more hints: 1. Use preformatted paragraph style or a code block to paste SPL - it helps in reability and prevents the forum interface from rendering some text as emojis and such. 2. What do you mean by "doesn't work"? Do you get an error? Or you simply get different results than expected? If so, how they differ? 3. There are two typical approaches to debugging SPL - either build it from the start adding commands one by one until they stop yielding proper results or start with the whole search and remove commands from the end one by one until they start producing proper results - then you know which step is the problematic one. 4. Often it's much easier for people to help you when you provide sample(s) of your data and describe what you want to do with it than posting some (sometimes fairly complicated) SPL without additional comments as to what you want to achieve.
I'm trying to import a csv file generated by the NiFi GetSplunk component. It retrieves events from a Splunk instance SPL-01 and stores them in a CSV file with the following header: _serial,_time,source,sourcetype,host,index,splunk_server,_raw
I do an indexed_extraction=CSV when I import the csv files on another Splunk instance, SPL-02. If I just import the file, the host will be the instance SPL-02, and I want the host to be SPL-01. I got past this by having a transform as follows:
[mysethost]
INGEST_EVAL = host=$field:host$
Question 1: That gives me the correct host name set to SPL-01, but I still have an EXTRACTED_HOST field when I look at events in Splunk. I found the article below where I got the idea to use $field:host$, but it also uses ":=" for assignment, which did not work for me, so I used "=" and then it worked. I also tried setting "$field:host$=null()" but that had no effect. The article: https://community.splunk.com/t5/Getting-Data-In/How-to-get-the-host-value-from-INDEXED-EXTRACTIONS-json/m-p/577392
Question 2: I have a problem getting the data from the _time field in. I tried using TIMESTAMP_FIELDS in props.conf for this import:
TIMESTAMP_FIELDS=_time (did not work)
TIMESTAMP_FIELDS=$field:_time$ (did not work)
I then renamed the header line so the time column was named "xtime" instead, and then I could use props.conf and set TIMESTAMP_FIELDS=xtime. How can I use the _time field directly?
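For context, here is the working setup described above pulled together into one place (the sourcetype name nifi_csv is made up for illustration; the actual stanza names depend on your environment):

    # props.conf on SPL-02
    [nifi_csv]
    INDEXED_EXTRACTIONS = csv
    TIMESTAMP_FIELDS = xtime
    TRANSFORMS-sethost = mysethost

    # transforms.conf on SPL-02
    [mysethost]
    INGEST_EVAL = host=$field:host$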
1. While I think I've read somewhere about some dirty tricks to import events from an evtx file, it's not something that's normally done. Usually you monitor the eventlog channels, not the evt(x) files themselves.
2. If you want to simulate a live system, it's usually not enough to ingest a batch of events from some earlier-gathered dump, since the events will get indexed in the past. For such simulation work you usually use event generators like TA_eventgen.
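For the first point, monitoring the live channel on each device with the Universal Forwarder would look roughly like this (a minimal sketch for the Security channel; other channels follow the same pattern):

    # inputs.conf on the Universal Forwarder
    [WinEventLog://Security]
    disabled = 0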
Hi, Another strange thing that happens to me, and I just realized it: when I refresh the Incident Review page with correctly loaded filters and true notable results showing, the "source" filter becomes something like this: source: Access%20-%20Excessive%20Failed%20Logins%20-%20Rule and no results are shown on the page after the refresh. Thanks.
Hello Members, I have problems between the peers and the manager node (CM). I tried to identify the issue, but I cannot find a possible way to fix it because I didn't notice any problems regarding the connectivity. See the pic below.
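One thing that may help narrow this down (assuming you have CLI access to the Cluster Manager): run the cluster status command on the CM, check which peers it reports as Up or Down, and compare that with splunkd.log on the affected peers around the same time.

    splunk show cluster-status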
I mean, if you uncheck that option, you should see "ALL" the BTs (hopefully). When that option is enabled, it will only display BTs that have performance data.
Hi, Can anyone with experience using the SOCRadar Alarm add-on help me figure out why the fields "alarm asset" and "title" return empty values? We're using Splunk v9.1.5 and SOCRadar Alarm Collector v1.0.2.
Hello Terence, I have checked, and the option "Transactions with Performance Data" is already checked. Please find the attached screenshot for your reference and provide any other alternate solutions. Thanks, Satishkumar
I have checked everything, and it appears Splunk is reporting a connectivity issue, but there are no issues. I think it requires support from Splunk itself...
From your screenshot, if you click on the Filters, is the option "Transactions with Performance Data" checked?
Corrected
Hi, I have a problem with a data model search. This is my SPL:
|datamodel Network_Resolution_DNS_v2 search
| search DNS.message_type=Query
| rename DNS.query as query
| fields _time, query
| streamstats current=f last(_time) as last_time by query
| eval gap=last_time - _time
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime
It works fine but slowly, so I would like to change it to use the data model with tstats. What should the query look like? I have a DM named DNS_v2 and it works for other queries, but not for this one:
| tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" where DNS.message_type=Query groupby _time
| mvexpand query
| streamstats current=f last(_time) as last_time by query
| eval gap=(last_time - _time)
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime
Has anyone had this problem before?
  Hello Splunk Community, I have .evtx files from several devices, and I would like to analyze them using Splunk Universal Forwarder (the agent). I want to set up the agent to continuously monitor these files as if the data is live, so that I can apply Splunk Enterprise Security (ES) rules to them.
Great! Works as expected. One correction: it should be double quotes instead of single quotes in the search: | search source !="*log.2024-*"
Hi Zack, So I checked with our team that manages our indexers / heavy forwarders / Splunk backend. I also checked the metrics.log on a server we are using in our Splunk support case, and couldn't see any queues building up - plus the sample server we are using (an SQL member server) doesn't really have a high level of traffic.
During the period that the Security logs aren't sending, I can see data still coming in from the Windows Application event log and other Windows event logs (like AppLocker event logs and SMB auditing event logs) - so event log data is coming in, just not from the Security log during the error periods. A restart of the UF causes it to re-process anything that is still local in the Security event log.
We had an older case somewhat like this for the Windows Defender Antivirus event logs not capturing data. The outcome was that Splunk added a new directive - channel_wait_time= - to cause the Splunk UF to retest that the event log existed after not being able to access it for a time period, which would cause the data to start being captured again. It could be that a similar directive needs to be added - but it hasn't been required during the many years we have had Splunk running.
Recently, on advice from Splunk from an ongoing case about another issue, they changed settings on the indexers - so that bit is set as you mentioned:
useACK = false
* A value of "false" means the forwarder considers the data fully processed when it finishes writing it to the network socket.
They mentioned the value of false is legacy behaviour. In our setup, they currently also have:
autoBatch = true
* When set to 'true', the forwarder automatically sends chunks/events in batches to the target receiving instance connection. The forwarder creates batches only if there are two or more chunks/events available in the output connection queue.
* When set to 'false', the forwarder sends one chunk/event to the target receiving instance connection. This is old legacy behavior.
* Default: true
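For reference, those two settings live in outputs.conf on the forwarding side; as described above they would look roughly like this (the tcpout group name primary_indexers is made up for illustration):

    # outputs.conf on the forwarder
    [tcpout:primary_indexers]
    useACK = false
    autoBatch = true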