Hi @Nicolas2203, checkpoints are managed in different ways (e.g. DB Connect uses a KV store table), so you first have to understand where your checkpoints are stored, and then align the HFs using a scheduled script that copies configurations and checkpoints; the HFs will then be aligned as of the last run of the script. Ciao. Giuseppe
Hello @gcusello Thanks for the answer. OK, I understand: I will install the app on both HFs and activate it on only one. When you say you have to manage checkpoints between HFs, how is that possible in Splunk? Assuming that logs are kept on the source for 2 weeks in case of an outage, when I activate log collection on the second HF it will start collecting logs from the day it is activated, and it won't be aware of the logs already ingested into Splunk?
I'm thinking of either an external lookup or a custom search command. But what confuses me here is that you're talking about a "file". What file do you have in mind?
Hi @neerajs_81, using Classic Dashboards, you can only put the input boxes in the same row at the top of the dashboard; if you want a different position, you have to use Dashboard Studio. Ciao. Giuseppe
There is an easier way:
index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time,table_Name
| sort 0 +_time -count
| dedup 10 _time
Hi, I am using a Classic dashboard. I have the 2 input boxes below (SRC_Condition and Source IP) to filter on src_ip. By default, we can only place input boxes next to one another. How can I align these 2 on top of one another? Splunk doesn't allow us to drag and drop them on top of each other.
Hi, I'm trying to drill down on a table using two different input values (from two radio button inputs). When I have input from one radio button, it all works fine. For example, if I have this statement in the drilldown tag of the table, it works perfectly:
<set token="tokenNode">$click.value$</set>
However, when I add a second set-token statement, it just says "No Results Found". I tried both click.value and click.value2.
Option 1:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value$</set>
Option 2:
<set token="tokenNode">$click.value$</set>
<set token="tokenSwitch">$click.value2$</set>
Thanks @ITWhisperer This helps. I am going to read more about the streamstats command now. My desired output is as below, where I am trying to see the daily % growth of data. For example, the green-colored table is the output I got from your modified Splunk search. I want to generate output as in the other table, where "daily % growth" (for each table on a date) is (120-100)/100, rounded to 0 decimal places and shown as a percentage. Is this something which can be achieved?
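A minimal sketch of one way to compute that in SPL, building on the earlier search (it assumes the count and table_Name field names from that search; prev_count and daily_pct_growth are just illustrative names):
index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time, table_Name
| sort 0 table_Name +_time
| streamstats window=1 global=false current=f last(count) as prev_count by table_Name
| eval daily_pct_growth=round((count-prev_count)/prev_count*100, 0)
For the 100-to-120 example this gives round((120-100)/100*100, 0) = 20; the first day of each table has no previous value, so daily_pct_growth stays empty for that row.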
It might be worth taking a look at the Business Workflows feature. It's possible it may align with your use case. https://docs.splunk.com/observability/en/apm/workflows/workflows.html
OK, here is what I found. The proxy environment variables can't be set in inputs.conf because they are not included in inputs.conf.spec. If you want to try a different approach, you might be able to set the proxy environment variables in the startup script for the collector. This is not a supported config, but it could be worth a try to see if it has the desired effect, and maybe it will lead to other ideas/solutions. For example, if I were running this on a Linux host, I could try setting HTTPS_PROXY in /opt/splunkforwarder/etc/apps/Splunk_TA_otel/linux_x86_64/bin/Splunk_TA_otel.sh (e.g., export HTTPS_PROXY=http://my-proxy:8080)
It looks strange, but I'm no expert on Cloud. Are you sure it isn't about visualization only? Anyway, you can probably emulate your relatively simple timechart either with a simple bin | stats by _time or with several passes of streamstats.
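For instance, a minimal sketch of the bin | stats variant, reusing the base search and the TPS field name from the original question (illustrative only, not a drop-in replacement for the whole query):
index=XXX "attrs"=traffic NOT metas
| fields - _raw
| bin _time span=1s
| stats count as TPS by _time
Note that, unlike timechart, stats does not create rows for empty 1-second buckets, so seconds with zero traffic will not be counted in any average computed afterwards.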
Well, a 1s span over three days is indeed quite a lot of results, but I don't see a problem with that. A run-anywhere example:
| makeresults count=3000000
| streamstats count
| eval _time=_time-count/10
| eval _time=_time+((random()%10-5))
| timechart span=1s count
What version are you using? EDIT: OK, I read days where you wanted months. Still, it's less than 8 million rows. It might be a bit performance-intensive, but Splunk should manage provided you have enough memory. To limit memory usage, remove the raw event value as early as possible. So:
<your initial search> | fields - _raw | timechart ...
Hi Splunk Experts, can you please let me know how we can calculate the max and avg TPS over the last 3 months, along with the exact time of occurrence? I came up with the query below, but it shows an error because the event count is greater than 50000. Can anyone please help or guide me on how to overcome this issue?
index=XXX "attrs"=traffic NOT metas
| timechart span=1s count AS TPS
| eventstats max(TPS) as MAX_TPS
| eval Peak_Time=if(MAX_TPS==TPS,_time,null())
| stats avg(TPS) as AVG_TPS first(MAX_TPS) as MAX_TPS first(Peak_Time) as Peak_Time
| fieldformat Peak_Time=strftime(Peak_Time,"%x %X")
I have the Splunk search below, which gives the top 10 only for a particular day, and I know the reason why. How can I tweak it to get the top 10 for each date? i.e. if I run the search on 14-Oct, the output must include 10-Oct, 11-Oct, 12-Oct and 13-Oct, each with the top 10 table names with the highest insert sum.
index=myindex RecordType=abc DML_Action=INSERT earliest=-4d
| bin _time span=1d
| stats sum(numRows) as count by _time,table_Name
| sort limit=10 +_time -count
Thanks in advance
I suppose you're talking about Proofpoint Secure Access (formerly Zero Trust Network Access, formerly Proofpoint Meta). I doubt that you're going to find anything relevant. Firstly, it's not a very popular solution; secondly, it's a cloud-based service, so you'll most probably need some API-pulling modular input (maybe there's some on-prem component, but I haven't touched the stuff so I have no experience here). And thirdly, it's getting retired at the end of 2024.
Indeed there is no direct app for it on Splunkbase, even if you look through the archive. Do you have any logging settings in the Proofpoint VPN interface, or any specific API documentation on the VPN service of Proofpoint?
The log I provided was just a sample set to show what I am searching. So, if I search for just "View Refresh" over a duration of 1 hour, I see 4 sets of events, i.e. 4 entries each of "start" and "end". To underline my commandments:
Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
Illustrate the desired output from the illustrated data.
If volunteers do not see actual data (4 sets of events), how can we tell why you do not get the desired results (4 durations)?