Thanks for your answer. Running it as you provided it (plus adding the timewrap, which I think you forgot), it doesn't produce any output. I removed the last where just to troubleshoot it. I also tried replacing your s0 and s1 like this, but again the output is empty.
| tstats prestats=t `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timechart span=10m count as event_count by Web.site useother=false limit=0
| timewrap 1w
| foreach *_
[ eval <<MATCHSTR>>_combined=<<MATCHSTR>>_latest_week."|".<<MATCHSTR>>_1week_before_week ]
| fields _time *_combined
| untable _time Web.series values
| eval values=split(values,"|")
| eval old=mvindex(values,0), new=mvindex(values,1)
| fields - values
It looks like a complex workaround for something that sounds like a pretty standard use case to me. Do you know any simpler way of doing this? Thanks again!
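For what it's worth, one way to narrow down where the empty output comes from is to run the tstats stage on its own, without prestats=t, so its results are printed directly (this is just the first line of the search above, nothing new):
| tstats `summariesonly` count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
If that already returns nothing, the problem is in the data model or its accelerated summaries rather than in the timechart/timewrap/foreach part.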
Your license only measures how much data you're ingesting daily (or how much compute power you use on the indexing and search tiers, but that's a relatively uncommon scenario). Splunk doesn't care how many additional components you have. In some specific scenarios (like a detached environment) you might need a no-ingest license for forwarders. The question is what you are doing on the HFs - are you running any modular inputs on them, or are they just a parsing layer before the indexers? With modular inputs the critical item is the input's state, because what you don't want is to re-ingest all the data from the start after a failover. The deployment server is a bit easier, since a DS serves mostly "static" content. There are a few scenarios of HA installations covered by the Core Services Implementation course - either a parent/child setup or sibling replication. And with a relatively new Splunk release you can also create a clustered DS setup: https://docs.splunk.com/Documentation/Splunk/latest/Updating/Implementascalabledeploymentserversolution
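To make the failover idea a bit more concrete, a minimal sketch of the two configs usually involved (hostnames are placeholders, not anything from this thread):
# outputs.conf on the sending forwarders - automatic load balancing across a pair of HFs
[tcpout:hf_group]
server = hf1.example.com:9997, hf2.example.com:9997
# deploymentclient.conf on the deployment clients - point at a VIP/load balancer in front of the clustered DS
[target-broker:deploymentServer]
targetUri = ds-vip.example.com:8089
Treat this as an illustration only; the exact layout depends on which HA scenario from the course you go with.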
Perhaps if you shared your actual events (anonymised as little as possible, of course), we might be able to give more useful advice - as it stands, a generic question will usually get a generic response!
Hello, "This looks like JSON of sorts - have you considered treating it as such?" - I'm not sure how to implement that. | rex mode=sed "s/\"Feild\d\"://g" - how do we implement this for multiple fields like Feild1, Feild2 etc.?
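In case it helps, here is a rough sketch of both options against the sample event from this thread (field names follow the "Feild<n>" spelling seen in the data; illustrative only, not tested against your real events). Treating it as JSON:
| spath
| eval csv_line=Feild1.",".Feild2.",".Feild3.",".Feild4
Or extending the sed approach so it strips all numbered field names plus the JSON punctuation:
| rex mode=sed "s/\"Feild\d+\"://g"
| rex mode=sed "s/[{}\"]//g"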
Username | count_username | src | src_count
root | 102 | 168.172.1.1 | 132
admin | 71 | 10.10.0.1 | 60
yara | 34 | 168.0.8.1 | 12
And if there are more fields, search for the top three fields with the top three values.
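If the goal is the table above (top three values of each field side by side), here is a hedged sketch of one way to build it - index=your_index stands in for whatever your base search is:
index=your_index
| top limit=3 Username showperc=false countfield=count_username
| appendcols [ search index=your_index | top limit=3 src showperc=false countfield=src_count ]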
I am trying to install the Splunk App for SOAR and the Splunk App for SOAR Export, however I am facing the issue below. I am using the soar_local_admin user account and have added this user to the phantom role as well; still the same. Any suggestion will be highly appreciated.
I checked all of them independently and they're all empty. Running the search in the format you've put throws an error: Error in 'inputlookup' command: This command must be the first command of a search. Which I think is valid, as it starts with "inputlookup". NOTE: Before the update, notables were created successfully, so my notable index had data. In order to check if there was any problem with the index itself, I exported the notables into CSV files (exporttool), then removed the notable index and recreated it.
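For reference, inputlookup is a generating command, so it only works when it is the very first command, written with a leading pipe - for example:
| inputlookup es_notable_events
| stats count
The same applies to incident_review_lookup; index=notable is simply a normal event search on its own.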
I'm really not an expert in datagrams, but according to my observed behaviour, it is not true. If each datagram were a separate event, it would not be possible to see the same behaviour, because with "SHOULD_LINEMERGE = false" events can be defined without LINE_BREAKER.
>each datagram is treated as separate event
Returning to the _indextime versus _time question - it's just an addition in my case, because if your log rate is pretty low, you can see in real time how events show up in search only after the next one arrives.
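For context, the props.conf combination being discussed typically looks like this (the sourcetype name is a placeholder, and this is the usual single-line-event setup, not necessarily what the OP currently has):
[my_udp_syslog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)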
This looks like JSON of sorts - have you considered treating it as such? In the meantime, you could use rex mode=sed:
| rex mode=sed "s/\"Feild\d\"://g"
As a Splunk newcomer, I need guidance on using Splunk effectively to send logs to a Disaster Recovery (DR) environment, where I have one Heavy Forwarder (HF) and one Deployment Server (DS) on-premises. What steps should I take with my HF and DS to ensure smooth log ingestion into the DR Splunk Cloud instance? I have considered replicating the VMs (HF and DS) as a possible solution, but I am still not sure about the best approach. Please advise on the following:
- Are there any specific licensing requirements or restrictions for replicating Splunk instances?
- What are the potential performance implications of replicating a Splunk VM, especially considering the data volume and real-time or near real-time requirements?
- Are there any recommended best practices or configurations for replicating HF and DS VMs to a DR environment?
Thanks for your help.
Thank you for your response. I understand that using a dedicated syslog server is the best practice, but until this moment I hadn't understood which errors I could come across without it. I tested your props.conf suggestion but still observe the same behaviour that was described in the OP.
Hi, check each of the below (as separate searches); if you get no results, then we can check further.
index=notable
| inputlookup es_notable_events
| inputlookup incident_review_lookup
Hi,
I'm working with .NET and using the 'services/search/jobs/' API. After successfully connecting through the 'services/auth/login' API, I receive a SessionKey, which I add to the headers for subsequent requests as follows:
oRequest.Headers.Authorization = new AuthenticationHeaderValue("Splunk", connectionInfo.AccessToken);
When I receive a 401 error code after calling 'services/search/jobs/', I attempt to reconnect by calling 'services/auth/login' up to three times to retrieve a new session key and update the header accordingly. Despite this, the session key sometimes remains unchanged (is this expected behavior?), and regardless of whether the token changes or not, I continue to receive the 401 Unauthorized error:
Response: '<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="WARN">call not properly authenticated</msg>
</messages>
</response>
'
Error from System.Net.Http: System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized).
The URL I'm using starts with https and the port is 8089. Can you assist with this issue?
I actually did not know about the KV store and the mongodb instance behind it, so I did a little bit of research and trial-and-error on that. I disabled the KV store in Splunk, and all of the forms and dashboards related to notables and incident review stopped working (they threw an error about the dashboard not being available), so there should be a direct relation between the two. I enabled the KV store again and everything went back to normal (except I still have no notables stored). I've been trying to look for issues in the mongo logs but have found nothing so far. Can you please direct me towards other possible places in the KV store (or similar) to look and investigate?
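Not an authoritative checklist, but two places that are usually worth a look: the CLI command splunk show kvstore-status on the search head, and the mongod log, which is also indexed in _internal with its default sourcetype, so a search like the one below can scan it for recent problems (narrow it down with keywords once you see what the entries look like):
index=_internal sourcetype=mongod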
Dear All, we have a Splunk index with data like the pattern below, and the pattern was recently changed. {"Feild1":"DATA1","Feild2":"DATA2","Feild3":"DATA3","Feild4":"DATA4"} We have several dashboards using the previous data pattern, like below: DATA1,DATA2,DATA3,DATA4 I'm looking for a way to filter out or suppress the {"Feild1": "Feild2":.....} wrapping using Splunk queries and feed the output to the dashboards. Kindly suggest how this can be done. Thanks
Hi @KhalidAlharthi, ok, it shouldn't be a resource issue. The only remaining possibility is the throughput of the disks, which you can check only with an external tool like Bonnie++. Could you check the resources of your indexers using the Monitoring Console? Please check whether the resources are fully used. Then, you could try to configure parallel pipelines on your indexers; for more info see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Pipelinesets - you could try the value parallelIngestionPipelines = 2 in the [general] stanza of server.conf, so that you make better use of your hardware resources. Ciao. Giuseppe
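For reference, that setting goes into server.conf on each indexer and needs a restart; a minimal sketch:
[general]
parallelIngestionPipelines = 2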
Hi @Siddharthnegi, is your saved search an alert or a report? Start by searching for it in the specific page. Then, maybe the saved search is a private one and it's visible only to the owner. A second possibility could be that you aren't in the app where it's located. If not, check the savedsearches.conf files for where it's located - maybe it's saved with a different name. If you don't find it, are you sure that you saved it? Ciao. Giuseppe
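One more thing that often helps here: listing saved searches through the REST endpoint, which shows the owning app, owner and sharing for everything you have access to (the title string below is a placeholder):
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="*your_search_name*"
| table title eai:acl.app eai:acl.owner eai:acl.sharing is_scheduled cron_schedule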
True. Sometimes users don't have permissions to run their own crons and the system-wide crontab is fixed. That can be problematic here. Anyway, the (ugly) workaround to the issue of spawning such stuff from within Splunk itself would be to simply create multiple inputs. If you want to spawn 2-minute-long jobs every minute, you can just create two (or better yet, three, so that you're sure there's no overlap) separate inputs. One runs on */3, another one on 1,4,7,10..., and another one on 2,5,8,11... Ugly, but it should work, as sketched below.
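A minimal sketch of what those staggered inputs could look like in inputs.conf, assuming a scripted input - the script paths are placeholders, and each stanza needs its own path (e.g. thin wrapper copies of the same script); interval accepts cron expressions here:
[script://./bin/collect_a.sh]
interval = 0-59/3 * * * *
[script://./bin/collect_b.sh]
interval = 1-59/3 * * * *
[script://./bin/collect_c.sh]
interval = 2-59/3 * * * *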
Hi, I have a saved search which is cron scheduled, but it is not showing on the saved searches panel (Settings -> Searches, reports, and alerts). What could be the reason?