All Posts

Your license only measures how much data you're ingesting daily (or, in a relatively uncommon scenario, how much compute power you use on the indexing and search tiers). Splunk doesn't care how many additional components you have. In some specific scenarios (like a detached environment) you might need a no-ingest license for forwarders. The question is what you are doing on the HFs - are you running any modular inputs on them, or are they just a parsing layer in front of the indexers? With modular inputs the critical item is the input's state, because what you don't want is to re-ingest all the data from the start during a failover. The deployment server is a bit easier, since the DS serves mostly "static" content. A few HA installation scenarios are covered by the Core Services Implementation course - either a parent/children setup or sibling replication. And with a relatively recent Splunk release you can also create a clustered DS setup: https://docs.splunk.com/Documentation/Splunk/latest/Updating/Implementascalabledeploymentserversolution
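Not from the reply above, just an illustration of the forwarding side: if the HF should clone its output to both the production and the DR indexing tiers, a minimal outputs.conf sketch could look like the following (the group names and host:port values are placeholders, and cloning doubles outbound traffic, so whether you clone or only fail over is a design decision):

[tcpout]
defaultGroup = primary_indexers, dr_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:dr_indexers]
server = dr-idx1.example.com:9997, dr-idx2.example.com:9997

With only a single group listed in defaultGroup you get normal load-balanced forwarding instead of cloning.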
Perhaps if you shared your actual events (anonymised as little as possible, of course), we might be able to give more useful advice - as it stands, a generic question will usually get a generic response!
Hello,   "This looks like JSON of sorts - have you considered treating it as such?" - Not sure how to implement it.   | rex mode=sed "s/\"Feild\d\"://g" - how do we implement this for multiple fields like Feild1, Feild2, etc.?
Username | count_username | src | src_count
root | 102 | 168.172.1.1 | 132
admin | 71 | 10.10.0.1 | 60
yara | 34 | 168.0.8.1 | 12

And if there are more fields, search for the top three fields with the top three values.
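One way to get a layout like that (purely a sketch; the index name and field names are assumptions based on the sample above) is to run top for each field and stitch the results together:

index=your_index
| top limit=3 Username showperc=false
| rename count as count_username
| appendcols
    [ search index=your_index
    | top limit=3 src showperc=false
    | rename count as src_count ]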
I am trying to install the Splunk App for SOAR and the Splunk App for SOAR Export, however I am facing the issue below. I am using the soar_local_admin user account and have added this user to the phantom role as well; still the same. Any suggestion will be highly appreciated.
I checked all of them independently and they're all empty. Running the search in the format you've posted throws the error: Error in 'inputlookup' command: This command must be the first command of a search. Which I think is valid, as it starts with "inputlookup". NOTE: Before the update, notables were created successfully, so my notables index had data. In order to check whether there was any problem with the index itself, I exported the notables into CSV files (exporttool), removed the notables index and recreated it.
I'm really not an expert in datagrams, but according to the behavior I observed, it is not true. If each datagram were a separate event, it would not be possible to see the same behavior, because with "SHOULD_LINEMERGE = false" events can be defined without LINE_BREAKER.
>each datagram is treated as separate event
Returning to _indextime versus _time - it's just an addition in my case, because if your log rate is pretty low, you can see in real time how events show up in search only after the next one arrives.
This looks like JSON of sorts - have you considered treating it as such? In the meantime, you could use rex mode=sed:
| rex mode=sed "s/\"Feild\d\"://g"
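For what it's worth, a minimal sketch of the "treat it as JSON" route, assuming the events look like the {"Feild1":"DATA1",...} sample posted in this thread (the base search and the output field name are placeholders):

index=your_index sourcetype=your_sourcetype
| spath
| eval legacy_line=Feild1.",".Feild2.",".Feild3.",".Feild4
| table legacy_line

spath auto-extracts the JSON keys, and the eval simply rebuilds the old comma-separated line for the existing dashboards.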
As a Splunk newcomer, I need guidance on using Splunk effectively to send logs to a Disaster Recovery (DR) environment, where I have one Heavy Forwarder (HF) and one Deployment Server (DS) on-premises. What steps should I take with my HF and DS to ensure smooth log ingestion into the DR Splunk Cloud instance? I have considered replicating the VMs (HF and DS) as a possible solution, but I am still not sure about the best approach. Please advise on the following:
- Are there any specific licensing requirements or restrictions for replicating Splunk instances?
- What are the potential performance implications of replicating a Splunk VM, especially considering the data volume and real-time or near real-time requirements?
- Are there any recommended best practices or configurations for replicating HF and DS VMs to a DR environment?
Thanks for your help.
Thank you for your response. I understand that using a dedicated syslog server is the best practice, but until this moment I hadn't understood which errors I could come across without it. I tested your props.conf suggestion but still observe the same behavior that was described in the OP.
Hi,

Check the below; if you get no results, then we can check further.

index=notable
| inputlookup es_notable_events
| inputlookup incident_review_lookup
Hi, I'm working with .NET and using the 'services/search/jobs/' API. After successfully connecting through the 'services/auth/login' API, I receive a SessionKey, which I add to the headers for subsequent requests as follows:
oRequest.Headers.Authorization = new AuthenticationHeaderValue("Splunk", connectionInfo.AccessToken);
When I receive a 401 error code after calling 'services/search/jobs/', I attempt to reconnect by calling 'services/auth/login' up to three times to retrieve a new session key and update the header accordingly. Despite this, the session key sometimes remains unchanged (is this expected behavior?), and regardless of whether the token changes or not, I continue to receive the 401 Unauthorized error:
Response: '<?xml version="1.0" encoding="UTF-8"?> <response> <messages> <msg type="WARN">call not properly authenticated</msg> </messages> </response> '
Error from System.Net.Http: System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized).
The URL I'm using starts with https and the port is 8089. Can you assist with this issue?
I actually did not know about KVStore and the mongodb instance behind it, so I did a little bit of research and trial-and-error on that. I disabled KVStore in Splunk, and all of the forms and dashboards related to notables and incident review stopped working (they threw an error about the dashboard not being available), so there should be a direct relation between the two. I enabled KVStore again and everything went back to normal (except I still have no notables stored). I've been trying to look for issues in the mongo logs but nothing so far. Can you please direct me towards other possible places in KVStore (or similar) to look and investigate?
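Not from the thread, just a couple of places that are often worth a look when chasing KVStore problems (the sourcetype and component names are the usual defaults; adjust if yours differ):

index=_internal sourcetype=mongod
index=_internal source=*splunkd.log* component=KVStore* log_level!=INFO

The first shows the mongod log as indexed into _internal; the second shows KVStore-related messages from splunkd.log.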
Dear All, We have a splunk index with data like the pattern below, and the pattern was recently changed.
{"Feild1":"DATA1","Feild2":"DATA2","Feild3":"DATA3","Feild4":"DATA4"}
We have several dashboards using the previous data pattern, like below.
DATA1,DATA2,DATA3,DATA4
Looking for a way to filter out or suppress the {"Feild1": "Feild2":.....} wrapping using Splunk queries and feed the output to the dashboards. Kindly suggest how this can be done. Thanks
Hi @KhalidAlharthi , ok, it shouldn't be a resource issue. The only remaining possibility is the throughput of the disks, which you can check only with an external tool like Bonnie++. Could you check the resources of your indexers using the Monitoring Console? Please check whether the resources are fully used. Then, you could try to configure parallel pipelines on your indexers; for more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Pipelinesets You could try the value parallelIngestionPipelines = 2 in the general stanza of server.conf; this way you make better use of your hardware resources. Ciao. Giuseppe
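For reference, a minimal sketch of that setting in server.conf on each indexer (it requires a restart, and it is only worth enabling when CPU and disk I/O have spare headroom, as the linked docs explain):

[general]
parallelIngestionPipelines = 2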
Hi @Siddharthnegi , is your savedsearch an alert or a report? Start by searching for it on the specific page. Then, maybe the saved search is a private one and it's visible only to the owner. A second possibility could be that you aren't in the app where it's located. If not, check in the savedsearches.conf files where it's located; maybe it's saved with a different name. If you don't find it, are you sure that you saved it? Ciao. Giuseppe
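One quick way to check across all apps and owners (a sketch, assuming your role is allowed to run rest searches; replace the title with the name of your saved search):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="your_search_name"
| table title eai:acl.app eai:acl.owner eai:acl.sharing is_scheduled cron_schedule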
True. Sometimes users don't have permissions to run their own crons and the system-wide crontab is fixed. That can be problematic here. Anyway, the (ugly) workaround to the issue with spawning such stuff from within Splunk itself would be to simply create multiple inputs. If you want to spawn 2-minute-long jobs every minute, you can just create two (or better yet three, so that you're sure there's no overlap) separate inputs: one running */3, another one at 1,4,7,10..., and another one at 2,5,8,11... Ugly, but it should work - see the sketch below.
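A rough sketch of what that could look like in inputs.conf for a scripted input (the script path and the slot arguments are placeholders; scripted inputs accept a cron expression in interval):

[script://$SPLUNK_HOME/etc/apps/myapp/bin/check.sh slot0]
interval = */3 * * * *
sourcetype = myjob

[script://$SPLUNK_HOME/etc/apps/myapp/bin/check.sh slot1]
interval = 1-59/3 * * * *
sourcetype = myjob

[script://$SPLUNK_HOME/etc/apps/myapp/bin/check.sh slot2]
interval = 2-59/3 * * * *
sourcetype = myjob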
You might not have permissions to see it.
Hi, I have a saved search which is cron scheduled, but it is not showing on the saved searches panel (Settings -> Searches, reports, and alerts). What could be the reason?
"Anyway, a question too - why the need to delay the script's output in the first place? It seems a very unusual requirement."

... originally the script used a "timeout [variable_secs]" launching a while loop that waits with "test -f file" to see whether "file" was generated between STARTTIME and the end of the "timeout [variable_secs]" window (variable_secs is taken from a table; every file has its own variable_secs). If timeout exits with exit code 124, a stdout + log entry is written to a log file. If a new script is launched with the same identical args, it checks whether a previous one is still running and, if so, exits immediately, letting the previous one do its job. So I have a single input entry for each file in the table (file exists or file does not exist after start_time + variable_secs; the table also has its start_time for every file). During the "variable_secs" window, if there is a new file in the table to check, the script is blocked by the previous run, so I can't check it.

Let's say we have a table like this:

server1 /tmp/flow.log 07:00 07:30
server1 /tmp/flow2.log 07:10 07:15

The scripted input is run by splunkd every 5m. Let's say it's now 06:55:

06:55 splunkd runs the script, it exits with no output/log entry since it's not 07:00 or 07:10 (the script checks this)
07:00 splunkd runs the script, the task starts for "/tmp/flow.log" and waits until 07:30 for file generation
07:05 splunkd runs the script, aborted since the 07:00 run is still in the background and running
07:10 same as 07:05, "/tmp/flow2.log" is skipped
07:15 same as 07:10, "/tmp/flow2.log" is skipped
07:20 same as 06:55
...

So "/tmp/flow2.log" is totally skipped. Now, on some servers, as said, a cron was used. On other servers I rewrote the script without the timeout/sleep: it writes an entry every 5m with a variable "FOUND=[0|1]", and then in SPL a "stats count, sum(FOUND) as found by host, file" with some dashboards/alerts that trace them - a sum of 0 in that timerange means the file was not present.
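For completeness, a minimal sketch of the kind of alert search described in the last paragraph, assuming the script writes one event every 5 minutes with host, file and FOUND fields (the index name and timerange are illustrative):

index=your_script_index earliest=-30m
| stats sum(FOUND) as found by host, file
| where found = 0

Any row returned here means the file was never seen in the timerange, which can then drive the alert.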