Hello @dc18 , There are plenty of apps on Splunkbase that can be used for visualizing AWS data. One of them is the following: https://splunkbase.splunk.com/app/6311 Additionally, you can also check the AWS Content Pack, which can assist with a similar purpose. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
Hello community, I aim to compare the 'src_ip' values below with the CIDR IP ranges in the lookup file 'zscalerip.csv' using the query provided. If there is a match, the result should be recorded as true in the 'Is_managed_device' field; otherwise, it should be marked as false. However, upon executing this query, I'm getting identical results for all IPs, irrespective of whether they match a CIDR range. I have created a new lookup definition for the lookup with the following settings:
Type = file-based
min_matches = 0
default_match = NONE
filename = zscalerip.csv
match_type = CIDR(CIDR)
CIDR IP ranges in the lookup file:
CIDR
168.246.*.*
8.25.203.0/24
64.74.126.64/26
70.39.159.0/24
136.226.158.0/23
Splunk query:
| makeresults | eval src_ip="10.0.0.0 166.226.118.0 136.226.158.0 185.46.212.0 2a03:eec0:1411::"
| makemv delim=" " src_ip
| mvexpand src_ip
| lookup zscalerip.csv CIDR AS src_ip OUTPUT CIDR as CIDR_match
| eval Is_managed_device=if(cidrmatch(CIDR_match,src_ip), "true", "false")
| table src_ip Is_managed_device
I'm getting results in the following format:
src_ip            Is_managed_device
10.0.0.0          FALSE
166.226.118.0     FALSE
136.226.158.0     FALSE
185.46.212.0      FALSE
2a03:eec0:1411::  FALSE
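A likely cause (a sketch, not a confirmed fix): calling the lookup by its filename (zscalerip.csv) bypasses the lookup definition, so the CIDR match_type is never applied; in addition, cidrmatch() on the returned CIDR field cannot work when the lookup returned nothing. Matching through the lookup definition and testing for a non-null result might look like this (assuming the definition is named zscalerip_lookup):

```spl
| makeresults
| eval src_ip="10.0.0.0 166.226.118.0 136.226.158.0 185.46.212.0 2a03:eec0:1411::"
| makemv delim=" " src_ip
| mvexpand src_ip
| lookup zscalerip_lookup CIDR AS src_ip OUTPUT CIDR AS CIDR_match
| eval Is_managed_device=if(isnotnull(CIDR_match), "true", "false")
| table src_ip Is_managed_device
```

Note also that 168.246.*.* is wildcard syntax, not CIDR notation; with match_type = CIDR(CIDR) it would presumably need to be written as 168.246.0.0/16.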
I'm trying to change the font size of a table in a Dashboard Studio visualization. How is this done in the source code? I've tried a few ways but am having no luck. If it is possible, in which version can we increase the font size of a table? Thanks in advance, and I appreciate the help.
Looking to build an interactive dashboard from a CSV file which contains a timestamp. If we select "Last 7 days", I'm looking to filter data from 19th May to 13th May from the sample table below.
Sample data:
_time Index Sourcetype
19-05-2024 05:30 x y
18-05-2024 05:30 x y
...
One of the inputs I'm planning is a time frame, so to pass the token to the panels I'm trying to use |eval Time=relative_time(now(),"$time_tok$"), which is not working, as the time token comes with earliest and latest timestamps. So I've tried strptime to convert, but still no luck there. Can someone suggest a better way?
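One possible approach (a sketch; mydata.csv is a placeholder name): parse the CSV timestamps into epoch time with strptime, then filter with addinfo, which exposes the search's time range as info_min_time and info_max_time:

```spl
| inputlookup mydata.csv
| eval _time=strptime(_time, "%d-%m-%Y %H:%M")
| addinfo
| where _time>=info_min_time AND (info_max_time="+Infinity" OR _time<=info_max_time)
```

Note that addinfo only reflects the dashboard's time picker if the panel's search time range is bound to the token (e.g. $time_tok.earliest$ and $time_tok.latest$).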
Data is being ingested via a Universal Forwarder as raw data on the host. A user reported an issue with the data (e.g. persistuser) even though the timestamps are correct, and props.conf currently has the default configuration.
I get the following warning: Value in stanza [eventtype=snort3:alert:json] in /opt/splunk/etc/apps/TA_Snort3_json/default/tags.conf, line 1 not URL encoded: eventtype = snort3:alert:json
My tags.conf contains:
[eventtype=snort3:alert:json]
ids = enabled
attack = enabled
Any help is appreciated; I'm at a loss.
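Taking the warning literally, the colons in the stanza value need to be URL-encoded (`:` becomes `%3A`). A sketch of what the encoded stanza would look like:

```ini
[eventtype=snort3%3Aalert%3Ajson]
ids = enabled
attack = enabled
```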
We recently upgraded from 9.0.2 to 9.2.1 and started seeing some new errors on all indexer peer nodes, as shown below. -------- 05-17-2024 14:35:07.225 +0000 ERROR DispatchCommandProcessor [949840 TcpChannelThread] - Search results may be incomplete, peer <indexer peer ip>'s search ended prematurely. Error = Peer <indexer peer hostname> will not return any results for this search, because the search head is using an outdated generation (search head gen_id=4626; peer gen_id=4969). This can be caused by the peer re-registering and the search head not yet updating to the latest generation. This should resolve itself shortly. -------- The master has logs like the below. -------- splunkd.log.1:05-17-2024 12:06:59.491 +0000 WARN CMMaster [950487 CMMasterServiceThread] - got a large jump in gen_id suggestion=4921 current pending=1 reason=event=addPeerParallel Success guid=xxx adding_peers=7 -------- I tried the suggested actions from the discussion below, but no luck so far, and the ERROR has continued for days now. https://community.splunk.com/t5/Splunk-Enterprise/Why-am-I-receiving-this-error-quot-The-search-head-is-using-an/td-p/599044 It looks like the problem is with the primary master, as we could see that when switching to the standby master, the error goes away. Can anyone advise on this? What is a generation/gen_id, and is there a way to reset it to fix the issue?
@gcusello , I used quotes when I was trying different cases, in the hope that maybe adding them might somehow solve my problem, haha! Anyway, I tried the last search that you provided:
index=fudo_index completed_action="deleted session." | stats values(user) AS user values(fudo_session) AS session values(completed_action) AS "completed action" count(completed_action) AS counter BY node_address | where counter>0 | rename node_address AS address
Unfortunately, it didn't help the situation; the $address$ token is still not resolved. By the way, it does not matter whether I try with this new field that I extracted, or with the $dest$ or $dvc$ fields that were parsed from my logs from the beginning; for some reason, none of them resolve in the notable title. Do you have any other ideas on what I can check in order to solve my issue? Cheers, splunky_diamond
Hi @splunky_diamond,
probably this isn't the issue, but why do you use quotes?
index=fudo_index completed_action="deleted session."
| stats
values(node_address) AS address
values(user) AS user
values(fudo_session) AS session
values(completed_action) AS "completed action"
count(completed_action) AS counter
| where counter>0
Quotes are mandatory when you have spaces or special characters in the field names. Then, why don't you use an aggregation key (the BY clause)? I'd try something like this:
index=fudo_index completed_action="deleted session."
| stats
values(user) AS user
values(fudo_session) AS session
values(completed_action) AS "completed action"
count(completed_action) AS counter
BY node_address
| where counter>0
| rename node_address AS address
Ciao. Giuseppe
@gcusello , I tried your suggestion, and it worked for the "fudo_session" field, thank you! However, I tried the same on the "dvc" field and it does not work for some reason... I tried extracting a new field called "node_address" and added it to my search in the following way: index=fudo_index completed_action="deleted session." | stats values("node_address") as address values("user") as user values("fudo_session") as session values("completed_action") as "completed action" count("completed_action") as counter | where 'counter'>0 And in the title of the notable I have the following: Deleted recorded session $session$ detected on $address$ I also added both fields in the incident review settings as you said. Here is the result: the value that should appear instead of "$address$" is the IPv4 address. When I extracted the node_address field, I did it in the Enterprise Security app in the search. For the permissions, I made it global, with everyone able to read and only admin having write permissions (just like the fudo_session field). If both of them are completely identical, why isn't this field getting evaluated like fudo_session? Could you please help with troubleshooting this?
Hi all, I am ingesting k8s data with OpenTelemetry in my enterprise environment. I would like to know if there is a list of available metrics and their descriptions, or if there is any example dashboard that can help me visualize the states and behaviors of clusters, pods, and containers. I need to organize this to show it to the different teams. Thanks and cheers, JAR
Hi @LearningGuy , you can save your static data: in a csv lookup, in a kv-store lookup, or in an index, if you need timed updates on your data. The most frequent approach is to use a csv lookup. Ciao. Giuseppe
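For the csv lookup route, a minimal sketch (my_static_data.csv is a hypothetical name): write the static rows once with outputlookup, then read them back wherever needed:

```spl
| makeresults
| eval product="widget", owner="team_a"
| fields product owner
| outputlookup my_static_data.csv
```

Reading it back is then simply: | inputlookup my_static_data.csv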
Hi @jacknguyen, yes, if you're speaking of frozen buckets, you don't need to save buckets from both of the Indexers, only one. Pay attention to one thing: in an Indexer cluster, each Indexer holds two kinds of buckets: the buckets it indexed itself, and the buckets indexed by the other Indexer and replicated to it; you have to back up both of them. Ciao. Giuseppe
Hi @splunky_diamond , I suppose that you are confusing passing some Correlation Search fields in the title of the CS itself (using a token) with the fields to display in Incident Review. The example you gave is of the first type but, if I understand correctly, you want to display other fields in the notable information. To do this, you must add these fields to the Correlation Search results (e.g. as values in the stats command), so that they are written to the notable event; then go to [Configure > Incident Review > Incident Settings] and add these fields to those displayed (if they are not already present). Ciao. Giuseppe
@VijaySrrie - I cannot tell from the name of the app who its creator is, but you need to reach out to the developer of that app and raise a support case there.
@venkatramana - You can use the Splunk SDK for Java. Below are references:
https://dev.splunk.com/enterprise/docs/devtools/java/sdk-java
https://github.com/splunk/splunk-sdk-java
https://dev.splunk.com/enterprise/docs/devtools/java/sdk-java/gettingstartedsdkjava/installsdkjava/
I hope this helps! If it does, kindly upvote!
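As a quick illustration (a sketch under assumptions: host, port, and credentials are placeholders, and the splunk-sdk-java jar is on the classpath), connecting and running a simple search might look like this:

```java
import com.splunk.Event;
import com.splunk.Job;
import com.splunk.ResultsReaderXml;
import com.splunk.Service;
import com.splunk.ServiceArgs;

public class SplunkSearchExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        ServiceArgs loginArgs = new ServiceArgs();
        loginArgs.setHost("localhost");
        loginArgs.setPort(8089);
        loginArgs.setUsername("admin");
        loginArgs.setPassword("changeme");

        Service service = Service.connect(loginArgs);

        // Create a search job and wait for it to finish.
        Job job = service.getJobs().create("search index=_internal | head 5");
        while (!job.isDone()) {
            Thread.sleep(500);
        }

        // Stream the results and print each event's raw text.
        ResultsReaderXml reader = new ResultsReaderXml(job.getResults());
        Event event;
        while ((event = reader.getNextEvent()) != null) {
            System.out.println(event.get("_raw"));
        }
        reader.close();
    }
}
```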