Hi @gcusello, I tried it the way you suggested. It works when uploading a sample log, but the same config is not working on live data. Here is the props.conf:
[__auto__learned__]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]*){\"event\"\:\{\"
NO_BINARY_CHECK=true
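One thing worth checking here: props under [__auto__learned__] generally apply only to data pushed through the Add Data upload wizard, while a live feed arrives under its own sourcetype. A rough sketch of the same settings under a named stanza; the sourcetype name json_event_st is a placeholder for whatever the live input is actually assigned, and the LINE_BREAKER value is copied unchanged from above, so it may still need correcting as discussed later in the thread:

# props.conf on the parsing tier (indexer or heavy forwarder) handling the live feed
# json_event_st is a placeholder sourcetype name
[json_event_st]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]*){\"event\"\:\{\"
NO_BINARY_CHECK=true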
Hello, no, start time is in this format: 2024-05-20T04:00:53.847Z, and after the eval the result is the same: 2024-05-20T04:00:53.847Z! How can I convert it to epoch time? Thanks, Laurent
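For reference, a minimal SPL sketch that converts an ISO 8601 timestamp with milliseconds into epoch time (the field name start_time is an assumption; adjust it to the actual field):

| eval start_time_epoch=strptime(start_time, "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval start_time_check=strftime(start_time_epoch, "%Y-%m-%d %H:%M:%S.%3N")

strptime returns epoch seconds (with subsecond precision), and the second eval is only there to sanity-check the round trip.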
Yes, I copy-pasted the same payLoadInterface values into the CSV file, but I don't know why the Link is not coming through. And how can I check that the values from the lookup file are getting populated? The values are DSR_TEST, DSR_TEST1, DSR_TEST2.
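A quick way to confirm what the lookup actually contains is to read it back directly; a sketch assuming the lookup file is named interface_links.csv (a placeholder name):

| inputlookup interface_links.csv
| table InterfaceName Link

If the rows show up here but the join still fails, compare these values character by character (case, leading/trailing spaces) with the payLoadInterface values returned by your search.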
With polkit versions 0.120 and below, the version number used a major/minor format with the major version always 0. It appears that Splunk was using the dot between them to decode the version number in its create-polkit-rules option, to detect whether the older PKLA file or the newer JS rules format would be supported. Starting with polkit version 121, the polkit maintainers dropped the "0." major number and started using the former minor version as the major version. Because of this, Splunk does not currently seem to be able to deploy its own polkit rules. This affects both RHEL 9 and Ubuntu 24.04 so far in my testing. Has anyone else run into this issue, or have another workaround for it? Thanks!
root@dev2404-1:~# pkcheck --version
pkcheck version 124
root@dev2404-1:~# apt-cache policy polkitd
polkitd:
  Installed: 124-2ubuntu1
  Candidate: 124-2ubuntu1
  Version table:
 *** 124-2ubuntu1 500
        500 http://archive.ubuntu.com/ubuntu noble/main amd64 Packages
        100 /var/lib/dpkg/status
root@dev2404-1:~# /opt/splunk/bin/splunk version
Splunk 9.2.1 (build 78803f08aabb)
root@dev2404-1:~# /opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1 -create-polkit-rules 1
"
": unable to parse Polkit major version: '.' separator not found.
^C
https://github.com/polkit-org/polkit/tags
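One possible workaround until create-polkit-rules handles the new version scheme: run enable boot-start without the -create-polkit-rules option and drop the JS rule in by hand. The sketch below is based on assumptions rather than output from this thread: the unit name Splunkd.service, the user splunk, and the file name 10-Splunkd.rules are placeholders to verify against your own systemd-managed install.

root@dev2404-1:~# cat > /etc/polkit-1/rules.d/10-Splunkd.rules <<'EOF'
// Allow the splunk user to start/stop/restart the Splunkd systemd unit
// without being prompted for authentication
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "Splunkd.service" &&
        subject.user == "splunk") {
        return polkit.Result.YES;
    }
});
EOF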
Hi @karthi2809, do you have in payLoadInterface the same values "aaa", "bbb", "ccc"? If yes, you can join the Link to the events; otherwise, it isn't possible. Ciao. Giuseppe
Hi @gcusello, this is my lookup:
InterfaceName   Link
DSR_TEST    https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Lookup?_gl=1*1w7wkaf*_ga*MTYzMTg2Njc5NC4xNzExOTgxMTg4*_ga_GS7YF8S63Y*MTcxNjM4NTE1Ni41OS4xLjE3MTYzODYxMDMuNTYuMC4w*_ga_5EPM2P39FV*MTcxNjM4NTE1Ni4xNTcuMS4xNzE2Mzg2MTAzLjAuMC4zMjM3MzE2MTE.&_ga=2.25230836.839088300.1716203378-1631866794.1711981188
DSR_TEST1   https://community.splunk.com/t5/Splunk-Search/How-to-Combine-search-query-with-a-lookup-file-with-one-common/td-p/296885?sort=votes
DSR_TEST2   https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Gauge
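To join this lookup to the search results, a sketch of the lookup call; interface_links.csv is a placeholder file name, and the field names are taken from this thread:

... | lookup interface_links.csv InterfaceName AS payLoadInterface OUTPUT Link
| table payLoadInterface Link

Link is only populated when the payLoadInterface value matches an InterfaceName row exactly, which is why the value comparison mentioned elsewhere in the thread matters.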
I'm more familiar with the ServiceNow side of things, but in the alert action, there's a Custom Fields section. You can add additional fields there, e.g. description=[whatever info you want to pass from Splunk]. On the ServiceNow side, you'll have to tweak the Transform Map to map the Description field over from the import set table that incidents are originally created on to the actual Incident table in ServiceNow. I don't know why description isn't included OOTB; it seems like a pretty useful field to populate...
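On the ServiceNow side, that mapping can be a plain field map entry or a small transform script. A rough sketch of the scripted form only; the import set column name u_description is an assumption that has to be checked against the actual import set table Splunk writes to:

// onBefore transform map script on the incident transform map
// source = the import set row, target = the Incident record being created
(function runTransformScript(source, map, log, target) {
    target.description = source.u_description;
})(source, map, log, target);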
Hi @pm2012, the LINE_BREAKER isn't correct. Download a sample of your data to a text file and use it in the guided procedure [Settings > Add data]; in this way, you can find the correct sourcetype definitions to use to parse your data. Ciao. Giuseppe
We apparently have the StreamWeaver integration in place, but we are not sure how it was implemented, as the folks who did it are no longer around. How is it usually done? Is it a REST API integration, as I see at Connect: Splunk Enterprise?
We have this stood up and working...sort of. Splunk Admins can configure alerts to add the "ServiceNow Incident Integration" action, and we can create Incidents in Splunk. The problem is, we have a lot of development teams that create/maintain their own alerts in Splunk. When they go to add this action, they're not able to select the account to use when configuring the action...because they don't have read permission to the account. Even if an Admin goes in and configures the action, it won't work at run-time, because the alert runs under the owner's permissions...which can't read the credentials to use to call ServiceNow. Has anyone else run into this issue? How can this be set up to allow non-Admins to maintain alerts?
Hi @karthi2809, check the values of payLoadInterface from the search, because they must match the related values in the lookup; in this way, you can join them and get the Link. About the Status condition, remove it, because you don't have the Status field after the stats command. Ciao. Giuseppe
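A quick sketch for eyeballing the search-side values (the base search is a placeholder for whatever feeds the table):

<your base search>
| stats count by payLoadInterface

Compare the output against | inputlookup interface_links.csv (a placeholder lookup name); the values have to match the InterfaceName column exactly, including case and any stray whitespace.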
As a ServiceNow Admin, this is DEFINITELY a problem on the ServiceNow side. Accounts calling the ServiceNow REST API need to be configured as web service only accounts, and have the correct roles applied based on what you're trying to read.
Hi @gcusello
1. Still I am not able to get Link values in the table.
2. "Then the condition Status LIKE (,"%") is wrong, what do you want to check?" ---> I am checking for Status as *
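If the intent is simply "any value of Status", a sketch of the two usual ways to write that in SPL:

... Status=*

or, inside an eval/where expression:

| where like(Status, "%")

Either way, Status has to actually exist at that point in the pipeline; after a stats command it only survives if it was an aggregated field or part of the by clause, which is what the remark above about the stats command is getting at.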
Yes, it is good practice to create a service account. As you said, people leave and KOs become orphaned. So, if you have a service account for, say, a business-critical app, you get the users/developers to create their various private KOs for that app, then move/clone them to the main app and assign the KOs to the service account user. I don't know if having multiple service accounts is needed, but perhaps one account per business-critical app would work. The service account will need sufficient capabilities and resources based on its Splunk role, and optionally you could look at workload management rules for the different roles and workloads, so that the role the important service account belongs to gets better performance than others.
Hi SMEs, while checking the logs from one of the log sources, I could see that events are not ending properly and are getting clubbed together. Putting the snap below and seeking your best advice on how to fix it.
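When events run together like this, the usual fix is explicit line breaking keyed on something that reliably starts each event, typically the timestamp. A generic sketch only, since the actual log format isn't visible here; the sourcetype name and the regex (which assumes events begin with an ISO-style timestamp) are placeholders:

[your_sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}

Adjust the regex to whatever actually marks the start of an event in the real data.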