All Posts


Hi @Mark_Heimer, check how you created the drilldown filter, because these are HTML-encoded characters. Ciao. Giuseppe
Hello Terence, My concern is not that the "ALL" BT is not showing up - I am satisfied with what the BT list is showing. I am asking why it is creating "All other Traffic". Because my limit (200) is not full yet, it should not create "All other Traffic". As per the AppDynamics documentation, "All other Traffic" should not be created unless my BT limit is full. So why is it being created in my case? Kindly let me know regarding this concern. Thanks, Satishkumar
There is another way to achieve similar results. Instead of the metadata command (which is great in its own right), you can use the tstats command, which may work a bit slower than metadata but can do more complicated things with indexed fields.

| tstats values(source) AS source WHERE index=* source!='*log.2024-*'
| mvexpand source
| <the rest of your evals>
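A follow-up in this thread points out that the filter needs double quotes rather than single quotes, which also applies to the tstats WHERE clause. For reference, a minimal sketch of the corrected pipeline (append your own eval logic after the mvexpand):

| tstats values(source) AS source WHERE index=* source!="*log.2024-*"
| mvexpand source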
What is a "DNS_v2" datamodel? It's not one of the CIM-defined ones. Your original search uses

|datamodel Network_Resolution_DNS_v2 search

whereas your tstats uses

| tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" [...]

A few more hints:
1. Use preformatted paragraph style or a code block to paste SPL - it helps readability and prevents the forum interface from rendering some text as emojis and such.
2. What do you mean by "doesn't work"? Do you get an error? Or do you simply get different results than expected? If so, how do they differ?
3. There are two typical approaches to debugging SPL - either build it from the start, adding commands one by one until they stop yielding proper results, or start with the whole search and remove commands from the end one by one until they start producing proper results - then you know which step is the problematic one.
4. It is often much easier for people to help you when you provide sample(s) of your data and describe what you want to do with it than when you post some (sometimes fairly complicated) SPL without additional comments as to what you want to achieve.
I'm trying to import a CSV file generated by the NiFi GetSplunk component. It retrieves events from a Splunk instance SPL-01 and stores them in a CSV file with the following header:

_serial,_time,source,sourcetype,host,index,splunk_server,_raw

I do an indexed_extraction=CSV when I import the CSV files on another Splunk instance, SPL-02. If I just import the file, the host will be the instance SPL-02, and I want the host to be SPL-01. I got past this by having a transform as follows:

[mysethost]
INGEST_EVAL = host=$field:host$

Question 1: That gives me the correct host name set to SPL-01, but I still have an EXTRACTED_HOST field when I look at events in Splunk. I found the article below, where I got the idea to use $field:host$, but it also has ":=" for assignment; that did not work for me, so I used "=" and then it worked. I also tried setting "$field:host$=null()" but that had no effect. The article: https://community.splunk.com/t5/Getting-Data-In/How-to-get-the-host-value-from-INDEXED-EXTRACTIONS-json/m-p/577392

Question 2: I have a problem getting the data from the _time field in. I tried using TIMESTAMP_FIELDS in props.conf for this import:

TIMESTAMP_FIELDS=_time (did not work)
TIMESTAMP_FIELDS=$field:_time$ (did not work)

I then renamed the header line so the time column was named "xtime" instead, and then I could set TIMESTAMP_FIELDS=xtime in props.conf and it worked. How can I use the _time field directly?
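For readers hitting the same issue, here is a consolidated sketch of the working configuration described above (the sourcetype name "nifi_csv" is a placeholder, and the xtime column is the renamed _time header from the workaround in Question 2, not a way to use _time directly):

# props.conf on SPL-02 (sourcetype name is hypothetical)
[nifi_csv]
INDEXED_EXTRACTIONS = csv
# only works after renaming the _time column in the CSV header to xtime
TIMESTAMP_FIELDS = xtime
TRANSFORMS-sethost = mysethost

# transforms.conf on SPL-02
[mysethost]
INGEST_EVAL = host=$field:host$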
1. While I think I've read somewhere about some dirty tricks to import the events from an evtx file, it's not something that's normally done. Usually you monitor the eventlog channels, not the evt(x) files themselves (see the sketch after this list).
2. If you want to simulate a live system, it's usually not enough to ingest a batch of events from some earlier-gathered dump, since the events will get indexed in the past. For such simulation you usually use event generators like TA_eventgen.
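A minimal inputs.conf sketch of the usual channel-monitoring approach on the Universal Forwarder (the Security channel is just an example; start_from and current_only pull in older events still present in the channel):

# inputs.conf on the Universal Forwarder
[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0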
Hi, another strange thing that happens to me, and I just realized it: when I refresh the "Incident Review" page with correctly loaded filters and true notable results showing, the "source" filter becomes something like this:

source: Access%20-%20Excessive%20Failed%20Logins%20-%20Rule

and no results are shown on the page after the refresh. Thanks.
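The %20 sequences are URL encoding of spaces, which suggests the filter value is being re-encoded on refresh. As a quick sanity check outside the UI (a sketch, not a fix for the Incident Review page itself), SPL's urldecode eval function shows what the decoded value should look like:

| makeresults
| eval source="Access%20-%20Excessive%20Failed%20Logins%20-%20Rule"
| eval decoded=urldecode(source)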
Hello members, I have problems between the peers and the managing node (CM). I tried to identify the issue, but I cannot find a possible way to fix it because I didn't notice any problems regarding the connectivity. See the pic below.
I mean if you uncheck that option, you should see "ALL" the BTs (hopefully). When that option is enabled, it will only display BTs that have performance data.
Hi, can anyone with experience using the SOCRadar Alarm add-on help me figure out why the fields "alarm asset" and "title" return empty values? We're using Splunk v9.1.5 and SOCRadar Alarm Collector v1.0.2.
Hello Terence, I have checked, and the option "Transactions with Performance Data" is already checked. Please find the attached screenshot for your reference and provide any other alternate solutions. Thanks, Satishkumar
I have checked everything, and it appears Splunk is reporting a connectivity issue, but there are no issues. I think it requires support from Splunk itself.
From your screenshot, if you click on Filters, is the option "Transactions with Performance Data" checked?
Corrected
Hi, I have a problem with a data model search. This is my SPL:

|datamodel Network_Resolution_DNS_v2 search
| search DNS.message_type=Query
| rename DNS.query as query
| fields _time, query
| streamstats current=f last(_time) as last_time by query
| eval gap=last_time - _time
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

It works fine but slowly, so I would like to change it to a tstats search against the data model. What should the query look like? I have the DM DNS_v2 and it works for other queries, but not for this one:

| tstats summariesonly=true values(DNS.query) as query FROM datamodel="DNS_v2" where DNS.message_type=Query groupby _time
| mvexpand query
| streamstats current=f last(_time) as last_time by query
| eval gap=(last_time - _time)
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| eval AverageBeaconTime=round(AverageBeaconTime,3), VarianceBeaconTime=round(VarianceBeaconTime,3)
| sort -count
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime>1.000
| table query VarianceBeaconTime count AverageBeaconTime

Has anyone had this problem before?
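One common variant of this pattern is to group by the query field inside tstats itself, with an explicit time span, instead of values() plus mvexpand. A sketch, assuming your custom DNS_v2 datamodel exposes DNS.query and DNS.message_type as in the searches above:

| tstats summariesonly=true count FROM datamodel="DNS_v2" WHERE DNS.message_type=Query BY _time span=1s DNS.query
| rename DNS.query AS query
| streamstats current=f last(_time) AS last_time BY query
| eval gap=last_time - _time
| stats count avg(gap) AS AverageBeaconTime var(gap) AS VarianceBeaconTime BY query
| where VarianceBeaconTime < 60 AND count > 2 AND AverageBeaconTime > 1.000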
Hello Splunk Community, I have .evtx files from several devices, and I would like to analyze them using the Splunk Universal Forwarder (the agent). I want to set up the agent to continuously monitor these files as if the data were live, so that I can apply Splunk Enterprise Security (ES) rules to them.
Great! Works as expected. One correction: it should be double quotes instead of single in the search:

| search source !="*log.2024-*"
Hi Zack,

So I checked with our team that manages our indexers / heavy forwarders / Splunk backend. I also checked the metrics.log on a server we are using in our Splunk support case, and couldn't see any queues building up in the metrics.log - plus the sample server we are using (an SQL member server) doesn't really have a high level of traffic.

During the period that the Security logs aren't sending, I can see data still coming in from the Windows Application Event Log and other Windows event logs (like App Locker event logs and SMB auditing event logs) - so event log data is coming in, just not from the Security log during the error periods. A restart of the UF causes it to re-process anything that is still local in the Security Event Log.

We had an older case sort of like this for the Windows Defender Antivirus event logs not capturing data. The outcome: Splunk added a new directive - channel_wait_time= - to cause the Splunk UF to retest that the event log existed after not being able to access it for a time period, and this would cause the data to start recapturing. It could be that a similar directive needs to be added - but it's not been required during the many years we have had Splunk running.

Recently, on advice from Splunk from an ongoing case about another issue, they changed the following on the indexers, so that bit is set as you mentioned:

useACK = false
* A value of "false" means the forwarder considers the data fully processed when it finishes writing it to the network socket.

They mentioned the value of false is legacy behaviour. In our setup, they currently also have:

autoBatch = true
* When set to 'true', the forwarder automatically sends chunks/events in batches to the target receiving instance connection. The forwarder creates batches only if there are two or more chunks/events available in the output connection queue.
* When set to 'false', the forwarder sends one chunk/event to the target receiving instance connection. This is old legacy behavior.
* Default: true
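For reference, a minimal outputs.conf sketch of the forwarder-side settings quoted above (the group name and indexer address are placeholders):

# outputs.conf on the forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = indexer1.example.com:9997
useACK = false
autoBatch = true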
The issue you're facing with index clustering, where one indexer peer shows an `addingbatch` status and fluctuates between `up` and `batchadding`, while the other peer shows `up` and then goes to `pending` status, suggests potential problems with data replication, network connectivity, or resource allocation.

### Possible Causes and Solutions:

1. **Data Replication Lag or Bottlenecks**:
   - **Cause**: The `addingbatch` status indicates that the peer is adding data from the replication queue to the index, but it is either delayed or encountering issues. This can occur if there is a backlog in the replication queue or if the peer is unable to process the data quickly enough.
   - **Solution**:
     - Check for any network latency or packet loss between the indexer peers. Ensure there is sufficient bandwidth for replication traffic.
     - Verify that the indexers have adequate disk I/O performance. If the disks are slow or under heavy load, consider upgrading the storage or optimizing disk usage.

2. **Connectivity Issues Between Peers**:
   - **Cause**: The fluctuating statuses (`up` to `batchadding` or `pending`) could indicate intermittent network connectivity issues between the indexer peers or between the indexers and the cluster master.
   - **Solution**:
     - Review the network configuration and ensure that all indexer peers and the cluster master are correctly configured to communicate with each other.
     - Check the Splunk internal logs (`index=_internal`) for any network-related errors or warnings (`source=*splunkd.log`).

3. **Cluster Master Configuration or Load Issues**:
   - **Cause**: The cluster master may be overwhelmed or improperly configured, leading to inconsistent status updates for the peers.
   - **Solution**:
     - Verify the cluster master's health and ensure it is not overloaded.
     - Review the cluster master's logs for any errors or configuration issues that might be causing delays in managing the peer status.

4. **Resource Constraints on Indexer Peers**:
   - **Cause**: The indexers might be under-resourced (CPU, memory, or disk space), causing them to be slow in processing incoming data or managing replication.
   - **Solution**:
     - Check the hardware resources (CPU, RAM, disk space) on each indexer. Ensure they meet the requirements for the volume of data being handled.
     - Increase the allocated resources or optimize the current configuration for better performance.

5. **Splunk Version Compatibility or Bugs**:
   - **Cause**: There may be bugs or version compatibility issues if different versions of Splunk are running on the cluster master and indexer peers.
   - **Solution**:
     - Make sure that all instances (cluster master and indexer peers) are running compatible versions of Splunk.
     - Review the [Splunk Release Notes](https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes) for any known issues or bugs that might match your problem.

6. **Configuration Issues**:
   - **Cause**: Misconfiguration in `indexes.conf` or other related files may cause replication or status reporting issues.
   - **Solution**:
     - Review your `indexes.conf`, `server.conf`, and `inputs.conf` files for any configuration errors. Ensure that all settings are aligned with best practices for index clustering.

### Next Steps:

1. **Log Analysis**: Review the `_internal` logs (`splunkd.log`) on all affected peers and the cluster master. Look for errors, warnings, or messages related to clustering or replication.
2. **Network Diagnostics**: Run network diagnostics to ensure there are no connectivity issues between indexer peers or between the peers and the cluster master.
3. If the issue persists, consider contacting Splunk Support for further assistance.

By systematically checking these areas, you can identify the root cause and apply the appropriate solution to stabilize the indexer peers in your Splunk cluster.
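As a starting point for the log-analysis step, a search sketch against the internal index (the keyword filters are examples, not an exhaustive list of clustering components):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) (clustering OR replication)
| stats count BY host component log_level
| sort -count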
Hi @FQzy

To set up the Cisco Security Cloud app in Splunk, you can find detailed guidance and documentation on its Splunkbase page (https://splunkbase.splunk.com/app/7404).

### Key Steps for Setup:
1. Download and Install: You can download the Cisco Security Cloud app from Splunkbase. Make sure to follow the specific instructions for installation, which include compatibility checks and required add-ons.
2. Configure Data Inputs: The app requires the configuration of several data inputs based on your Cisco security products (e.g., firewalls, intrusion protection, web security, etc.). The documentation provides step-by-step guidance for each type.
3. Troubleshooting Common Errors: For issues like "failed to create an input," ensure that all prerequisites (like appropriate permissions and network settings) are met. You may also want to consult the app's [Splunkbase page](https://splunkbase.splunk.com/app/5558) for specific troubleshooting tips.

If you run into errors or specific issues during setup, it might also be helpful to check the community discussions and resources available on the Splunk website.

For your reference, you can refer to this document: https://developer.cisco.com/docs/cloud-security/cloud-security-app-for-splunk/#cloud-security-app-for-splunk