All Posts

After running this query, I am not able to see one indexer. How can I resolve it?
Thanks @syedabuthahira  Please could you share the props/transforms you are referring to here so we can understand why they might not be applying the filtering you are expecting? Also, if you can share raw samples of the junk data, that would be great. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@syedabuthahira
Universal Forwarder (UF): Designed to collect logs (e.g., Windows Event Logs) and send them directly to Splunk indexers or intermediate Heavy Forwarders.
Heavy Forwarder (HF): A full Splunk instance that can parse, filter, and route data before sending it to indexers. Useful if data needs transformation or routing logic.
Syslog Forwarder: Typically used for network devices or non-Splunk agents sending logs in syslog format (e.g., UDP/TCP). Windows logs aren't natively syslog-formatted (they are in Event Log format, EVTX), so converting them to syslog adds complexity.
If a UF is installed on a Windows machine, it's generally unnecessary and inefficient to forward Windows logs to a syslog server first. The UF can send logs directly to indexers or an HF, avoiding extra hops, format conversion (e.g., to syslog), and potential data loss or latency.
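For illustration, a minimal outputs.conf sketch on the UF for the direct-to-indexer path (the hostnames and port here are hypothetical):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Send events straight to the indexing tier; no syslog hop or format conversion
server = idx1.example.com:9997, idx2.example.com:9997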
So @Poojitha - if you ran the dashboard search now for the last 60 minutes, it would search from the start of the minute 60 minutes ago until now, for example 08:40:00.000 to 09:40:12.000 (note that "now" in this case is 09:40:12, 12 seconds after the start of 09:40). If you then ran the same search in the Splunk search bar 10 seconds later, you would be searching 08:40:00.000 to 09:40:22.000. The reason this matters is that you may have more counts and more errors in those extra 10 seconds. To verify the counts you will need to run the search over a specific time window in both the dashboard and the Splunk search bar. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
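To illustrate, one way to pin the window in both places (the index name and dates here are hypothetical):

index=myapp earliest="05/27/2025:08:00:00" latest="05/27/2025:09:00:00" | stats count

Run this exact search in both the dashboard panel and the search bar; because the window no longer moves with "now", both should return identical counts.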
@syedabuthahira  First, I recommend sending the Windows logs directly to the indexers or via a heavy forwarder. I'm not sure why you're routing them through a syslog forwarder when the Universal Forwarder is already installed.
I'm trying to display a simple input and HTML text side by side but they are appearing vertically. I have tried a few CSS options but none of them worked. Sample panel code:

<panel id="panel1">
  <input type="text" token="tk1" id="tk1id" searchWhenChanged="true">
    <label>Refine further?</label>
    <prefix> | where </prefix>
  </input>
  <html id="htmlid">
    <p>count: $job.resultCount$</p>
  </html>
  <event>
    <search>
      . . . .
    </search>
  </event>
</panel>

How can I make the input and HTML text appear side by side and the events at the bottom? My requirement is to achieve this in a single panel in an XML dashboard. Thanks for any help!
Thank you @ITWhisperer @livehybrid for looking into it. I am receiving the logs from multiple Windows hosts through the syslog server, and we have the UF on the syslog server; through the UF we are forwarding the data to the indexer. The actual event-breaking issue is this: when I search the logs for a particular source using the SPL

index="win" sourcetype="*Snare*" | table host source

I should see only the hostname (that is what we have configured in our props and transforms), but I am seeing junk values in the host field; the events are actually breaking somewhere in the middle.
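For context, a hedged sketch of the kind of props/transforms host override being described (the sourcetype, stanza names, and regex are hypothetical, not the actual config from this thread):

props.conf:
[snare:security]
TRANSFORMS-set_host = snare_set_host

transforms.conf:
[snare_set_host]
# Capture the hostname field from the Snare header and write it to the event's host
REGEX = ^[^\t]+\t([^\t]+)\t
FORMAT = host::$1
DEST_KEY = MetaData:Host

If events are broken mid-record before this runs, the capture can match arbitrary text, which would produce exactly the junk host values described.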
@kfchen  Unfortunately, Splunk does not provide a built-in way to disable replication specifically for cold storage. Your idea of using a cron job to delete replicated buckets in cold storage is creative, but it comes with risks: deleting replicated buckets manually might lead to data inconsistency and potential search issues.

If Indexer A is under maintenance and the primary bucket is on Indexer A, Indexer B should still be able to query the replicated bucket. However, this depends on the search factor (SF) being met; if the SF is not met, searches might not return complete results. If Indexer A is down and Indexer B detects that it needs to meet the replication factor (RF), it will attempt to replicate the bucket to another indexer. This process is part of Splunk's mechanism to ensure data availability and redundancy.

Configuring all indexers to refer to a single shared file path for cold storage is possible: you would need to modify indexes.conf to set the coldPath to a shared directory. However, ensure that the shared storage is reliable and has sufficient performance to handle the load. Before proceeding with any changes, it's crucial to test your setup in a staging environment to avoid any disruptions in production. Please contact Splunk Support or PS.

NOTE: The official answer from Support is to NOT remove any replicated buckets even with clustering disabled, as they may be marked as the primary bucket. It is best to let them age out.
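For illustration only, a minimal indexes.conf sketch pointing coldPath at shared storage (the index name and mount point are hypothetical, and this on its own does not disable replication):

[my_index]
homePath = $SPLUNK_DB/my_index/db
# Cold buckets land on the shared, externally replicated mount
coldPath = /mnt/shared_cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb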
You are probably going to have to be a bit more precise. What does "junk values" mean? What does "breaking" mean? What configuration values do you have on the UF and SH? Is this issue isolated to specific hosts, data sources, times of day, sourcetypes, etc?
What does "When I run the query" mean? Are you copying/rewriting the search in a search window, or are you using the "Open in Search" button on the pie-chart? Does the timeframe for the results of t... See more...
What does "When I run the query" mean? Are you copying/rewriting the search in a search window, or are you using the "Open in Search" button on the pie-chart? Does the timeframe for the results of the search match the time frame you think you are using? For example:  
I currently have an indexer cluster; the RF and SF are 2. My hot/warm DB and my cold bucket will be on different storage disks in a cluster which has its own replication features, and I happen to have EC+2:1, meaning the data on my cold storage will be replicated twice. As a result, I would like to disable replication on my cold storage, but there is currently no way to do that in Splunk (or not that I know of). I am thinking of writing a cron job that deletes all replicated buckets on the cold storage disk. For this to happen, all of the indexers should be referring to a single shared file path on the cold storage. However, this begs the question: will search still work as normal? Let's say the primary bucket is on Indexer A and the replicated copy is on Indexer B, but Indexer A is currently under maintenance; would it be possible for Indexer B to query the bucket with Indexer A's bucket ID? Additionally, will Indexer B sense that something is wrong and try to replicate the warm bucket again?
Hi @syedabuthahira  Are you able to give us an example of the junk you are seeing in the logs? Redact anything sensitive if needed. Also, what are the source file path(s) for these events? This information will help us to answer your question more accurately. Please consider adding karma to this or any other answer if it has helped. Regards Will
Hi All. I have noticed a lot of junk host values being reported to the search head. We receive logs from multiple OSes into Splunk through the UF. We are supposed to see only the host name during the search, but I have noticed a lot of junk values reporting to the SH. As part of troubleshooting, I verified the raw logs on the UF and they are not breaking there; somehow the logs are breaking between the UF and the indexer. Can you please assist me with this issue?
@livehybrid: Thanks for the response. The time frame is dynamic, from the time picker in the dashboard. I tried the last 60 minutes and an expanded time range as well. In all cases there is a discrepancy.
Hi @muhammadfahimma  I believe you may be experiencing a bug (BLUERIDGE-13575) which is a known issue with ES 8.0.2 (see https://docs.splunk.com/Documentation/ES/8.0.2/RN/KnownIssues). If this is the issue then you may find the following workaround solves it until fixed in the product: remove `source` before sending to detection by adding `| fields - source` to the end of the search. Either way, I would suggest raising a support case; even if it is this particular bug, they will be able to associate it with your account and keep you updated with progress and resolution. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
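As a sketch of where that workaround lands (the detection search itself is hypothetical):

index=security_data sourcetype=some:st
| stats count by user
| fields - source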
Hi @shabamichae, as @isoutamo also said: do only the requested things, nothing else! Ciao. Giuseppe
Hi @Poojitha, two things. First, put all the search terms in the main search to get a more performant search:

index="*test" sourcetype=aws:test host=testhost lvl IN (Error, Warn) source="*testsource*"
| stats count BY lvl
| sort -count

Second, to compare two searches you have to use a defined time frame and never latest=now, because in the meantime you could have new events; so run your search over a past timeframe (e.g. as @livehybrid hinted) or the previous hour. Ciao. Giuseppe
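For example, a sketch pinning the same search to the previous whole hour with time modifiers:

index="*test" sourcetype=aws:test host=testhost lvl IN (Error, Warn) source="*testsource*" earliest=-1h@h latest=@h
| stats count BY lvl
| sort -count

Because both earliest and latest snap to hour boundaries, running this from the dashboard and from the search bar moments apart compares identical event sets.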
Hi there, how can I use the stats command to get a one-to-one mapping between fields? I have tried both the "list" function and the "values" function but the results are not as expected. Example: we are consolidating data from 2 indexes, and both indexes have the same fields of interest (user, src_ip).

Base query:

index=okta OR index=network | iplocation src_ip | stats values(src_ip) values(deviceName) values(City) values(Country) by user, index

Results: we get something like this:

user         index     src_ip                                                  DeviceName                                Country
John_smith   okta      10.0.0.1, 192.178.2.24                                  laptop01                                  USA
John_smith   network   198.20.0.14, 64.214.71.89, 64.214.71.90, 71.29.100.90   laptop01, laptop02, server01, My-CloudPC  USA

Expected results: how do we map which src_ip is coming from which DeviceName? We want to align the DeviceName in the same sequence as the src_ip. If I use list instead of values in my stats, it shows duplicates for src_ip and deviceName, and even doing a | dedup src_ip is not helping. Hope that is clear.
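One common approach (a sketch, not necessarily this thread's resolution) is to make src_ip part of the grouping key so each address stays on its own row beside its own device:

index=okta OR index=network
| iplocation src_ip
| stats values(deviceName) as deviceName values(City) as City values(Country) as Country by user, index, src_ip

This trades the compact one-row-per-user layout for an unambiguous src_ip-to-deviceName pairing.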
Hi @Poojitha  Can you confirm that you are running the search across the exact same time frame, e.g. "Yesterday"? If you run something like "Last 24 hours" then the actual timeframe will be different each time you run it; this would explain why your values are slightly different when running the search versus viewing the dashboard. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
1. Splunk process is running on the server.
2. Configured the correct inputs under inputs.conf and outputs.conf

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 300
index = wineventlog
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 300
blacklist1 = EventCode="4662" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:\s+(?!groupPolicyContainer)"
blacklist3 = EventCode="5447"
index = wineventlog
renderXml = false