All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


How did you install and upgrade your forwarder? RPM? deb? tgz?
Hello, I noticed that in version 9.1 and later, the user and group were changed to "splunkfwd". I updated the universal forwarder to the newer version (9.1), but the user and group did not change to "splunkfwd". Subsequently, we encountered several permission problems, such as the Universal Forwarder lacking permission to read auditd logs, which makes it necessary to modify the "log_group" parameter in the auditd.conf file. Should I change it manually, or is there an alternative solution that resolves all of the permission problems?
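If the goal is just to let the forwarder read the audit logs, adjusting auditd is one option. A minimal sketch, assuming the forwarder runs as user/group splunkfwd and your distribution reads /etc/audit/auditd.conf (restart auditd after the change):

```ini
# /etc/audit/auditd.conf (fragment)
# Make audit log files group-readable by the forwarder's group.
log_group = splunkfwd
```

Note that package upgrades generally do not change the ownership of an existing installation; if you want the forwarder itself to run as splunkfwd, the installation directory ownership has to match as well (e.g. a recursive chown to splunkfwd:splunkfwd on /opt/splunkforwarder while the forwarder is stopped).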
Hi @Drewprice, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @kyokei, this is another question, even if it's on the same data. Anyway, if you want to discard part of your events at index time, you have to use a SEDCMD in your props.conf (the class name after SEDCMD- is arbitrary):

[your_sourcetype]
SEDCMD-remove_duration = s/^([^,]*,)[^,]*,(.*)/\1\2/g

Ciao. Giuseppe
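As a sanity check outside Splunk, a sed-style substitution like the one above can be tried against a sample line with Python's re module (the sample line here is an illustrative assumption):

```python
import re

# Illustrative sample event: timestamp, relative duration, value, event flag
line = "23-11-26 01:20:51.500 AM,+1.000000000E-01,+1.45180E+01,0"

# Same rewrite the SEDCMD would apply: keep the first comma-separated
# field, drop the second, keep the rest.
pattern = re.compile(r"^([^,]*,)[^,]*,(.*)$")
result = pattern.sub(r"\1\2", line)
print(result)  # 23-11-26 01:20:51.500 AM,+1.45180E+01,0
```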
makeresults can be used to generate search results. An example from the documentation.

| makeresults
| eval test="buttercup rarity tenderhoof dash mcintosh fleetfoot mistmane"
| makemv delim=" " test
| mvexpand test

_time                  test
2024-01-01 00:00:00    buttercup
2024-01-01 00:00:00    rarity
2024-01-01 00:00:00    tenderhoof
2024-01-01 00:00:00    dash
2024-01-01 00:00:00    mcintosh
2024-01-01 00:00:00    fleetfoot
2024-01-01 00:00:00    mistmane

Then you can use `search` or any other commands as usual.

| makeresults
| eval test="buttercup rarity tenderhoof dash mcintosh fleetfoot mistmane"
| makemv delim=" " test
| mvexpand test
| search test="m*"

_time                  test
2024-01-01 00:00:00    mcintosh
2024-01-01 00:00:00    mistmane
@splunkcol the plugin update worked; your reply helped my friend. I wish you the best.
Hi, forgive my English, I'm using a translator. Regarding the problem, it is better that you send a ticket to Splunk directly; it seems to be a bug. In the end I gave up, because whenever they asked about the version of Splunk, they always made the excuse that they do not support old versions. In this case, although it is true that I was using an old version of Splunk, the problem was clearly the add-on.
Hi @splunkcol, what was your fix? We have the exact same issue. It's a mandatory field as soon as we update the secret key, and it doesn't allow saving.
Hi @gcusello, with your help I am able to extract the timestamp, _time = 23-11-26 01:20:51.500 AM, but _time is the same for each event:

23-11-26 01:20:51.500 AM,+0.000000000E+00,+2.90500E+00,0
23-11-26 01:20:51.500 AM,+1.000000000E-01,+1.45180E+01,0
23-11-26 01:20:51.500 AM,+2.000000000E-01,+7.93600E+00,0
23-11-26 01:20:51.500 AM,+3.000000000E-01,+3.60100E+00,0
23-11-26 01:20:51.500 AM,+4.000000000E-01,+3.19100E+00,0
23-11-26 01:20:51.500 AM,+5.000000000E-01,+3.17300E+00,0

How can I achieve the format below during data ingest?

23-11-26 01:20:51.500 AM,+2.90500E+00,0
23-11-26 01:20:51.600 AM,+1.45180E+01,0
23-11-26 01:20:51.700 AM,+7.93600E+00,0
23-11-26 01:20:51.800 AM,+3.60100E+00,0
23-11-26 01:20:51.900 AM,+3.19100E+00,0
23-11-26 01:20:52.000 AM,+3.17300E+00,0

Basically, add the durations in the "Time" column to the trigger time to create _time.

"Time","U1-2[]","Event"
+0.000000000E+00,+2.90500E+00,0
+1.000000000E-01,+1.45180E+01,0
+2.000000000E-01,+7.93600E+00,0
+3.000000000E-01,+3.60100E+00,0
+4.000000000E-01,+3.19100E+00,0

Thanks a lot for your help.
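For reference, the arithmetic being asked for (trigger time plus the per-row offset in seconds) can be sketched in Python; the timestamp format string is an assumption based on the sample above, and in Splunk itself this would typically be done at search time with eval/strptime or via a custom transform:

```python
from datetime import datetime, timedelta

FMT_IN = "%y-%m-%d %I:%M:%S.%f %p"  # e.g. 23-11-26 01:20:51.500 AM
trigger = datetime.strptime("23-11-26 01:20:51.500 AM", FMT_IN)

# (offset seconds, value, event flag) taken from the sample rows
rows = [
    ("+0.000000000E+00", "+2.90500E+00", "0"),
    ("+1.000000000E-01", "+1.45180E+01", "0"),
    ("+2.000000000E-01", "+7.93600E+00", "0"),
]

def fmt(ts):
    # %f prints microseconds; trim to milliseconds to match the sample
    return ts.strftime("%y-%m-%d %I:%M:%S.") + f"{ts.microsecond // 1000:03d}" + ts.strftime(" %p")

for offset, value, event in rows:
    ts = trigger + timedelta(seconds=float(offset))
    print(f"{fmt(ts)},{value},{event}")
# 23-11-26 01:20:51.500 AM,+2.90500E+00,0
# 23-11-26 01:20:51.600 AM,+1.45180E+01,0
# 23-11-26 01:20:51.700 AM,+7.93600E+00,0
```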
@PickleRick Thanks for the reply. Please ignore both searches. What I want is to pull out the total unique assets in the DHCP source. I then want to compare that to the total unique assets in the SysMon source and output the assets that do not have SysMon present. Thanks in advance.
Thank you so much! I was going down that track but could not put it together.
If you look into the outputs.conf specs, you'll see that it supports both SQS output and RFS output, which should be able to write into S3 buckets. I've never used them myself, though, so I have no idea how they work and whether they require a HF or will work with a UF as well (I suspect the former).
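For orientation, an RFS destination is declared as an [rfs:<name>] stanza in outputs.conf. The fragment below is an untested sketch; the setting names should be verified against the outputs.conf spec for your Splunk version, and the bucket, keys, and endpoint are placeholders:

```ini
# outputs.conf (untested sketch; verify against outputs.conf.spec)
[rfs:my_s3_dest]
path = s3://my-bucket/splunk-data
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com
```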
It's not clear how you distinguish your "sources". Your first search simply pulls data from two separate indexes, while the second one does something completely strange. Please describe what constitutes the sets of sources you want to calculate the difference between.
> "Another question: is it normal to only have default DataModels visible (and not all my DataModels) from the CM (Settings/DataModels)?" My DMs are OK... sorry for that.
Hello Community, I have a challenge finding and isolating the unique hosts out of two sources (DHCP and SysMon in my case). I tried the following, but it did not work as expected:

EXAMPLE 1:
index=dhcp_source_index
| stats count by host
| eval source="dhcp"
| append [ search index=sysmon_index | stats count by host | eval source="sysmon" ]
| stats values(source) as sources by host
| where mvcount(sources)=1 AND sources="dhcp"

EXAMPLE 2:
index=my_index
| dedup host, source
| stats list(source) as sources by host
| append [ search index=my_index | stats latest(_time) as last_seen by host ]
| eventstats max(last_seen) as last_seen by host
| where mvcount(sources)=1
| table host, last_seen

The numbers from the manual findings and the above SPL differ. Thanks in advance
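For what it's worth, the underlying logic being described is a plain set difference: hosts seen in DHCP minus hosts seen in SysMon. A minimal sketch with made-up host names:

```python
# Hypothetical host lists from the two sources
dhcp_hosts = {"host-a", "host-b", "host-c", "host-d"}
sysmon_hosts = {"host-b", "host-d"}

# Hosts present in DHCP but with no SysMon data
missing_sysmon = sorted(dhcp_hosts - sysmon_hosts)
print(missing_sysmon)  # ['host-a', 'host-c']
```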
Thanks for your reply. You are right. The decommissioned indexer is now in the "Graceful shutdown" state and its bucket count is 0. It took 2.5 days to decommission 20 TB of data. But SF/RF is still not green: 3 SF tasks are still pending, and I tried to resync them but nothing changed. Should I now do a rolling restart after removing my decommissioned indexer in order to get my SF/RF back? Or simply restart the splunkd daemon on my CM? Another question: is it normal to only have default DataModels visible (and not all my DataModels) from the CM (Settings/DataModels)? Many thanks
Here's an untested idea.  Install an HF on the server and use Splunk's Ingest Actions feature to write the data to S3.  It's not clear if the HF will be happy only writing to S3 or if it also will want to send to an indexer. See https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/DataIngest#Heavy_forwarders_managed_through_a_deployment_server for details, including the need for a Deployment Server.
Do not restart the decommissioned indexer. If the indexer stopped running then it has finished its work and the server can be retired.  Consider restarting the CM to force it to rebuild the bucket table.
Hi @williamcclark, The ldapsearch command attrs argument is similar to the Get-ADUser cmdlet Properties parameter; however, unlike Get-ADUser, ldapsearch does not return a default set of LDAP attributes. Using ldapsearch without the attrs argument is equivalent to running Get-ADUser -Properties *. (Technically, the default value for attrs is the Python constant ldap3.ALL_ATTRIBUTES, which evaluates to *.) To limit the attributes returned, provide a comma-delimited list to the attrs argument:

| ldapsearch attrs="sn,givenName,sAMAccountName"

In the add-on code, "Invalid attributes types in attrs list" is returned when a requested attribute is not present in the directory schema. How are you using the ldapsearch command? Is it being used by another app or add-on? Does the use case expect a schema extension that isn't installed on your target directory? For example, are you searching for Exchange-related attributes in a directory that does not have the Exchange schema extensions installed?
Hi @bhagyashriyan, Any Google Cloud solution that allows you to submit HTTP requests, parse HTTP responses, and publish messages to a Google Cloud Pub/Sub topic can use the Splunk Cloud REST API, assuming REST API access is enabled and granted to the source Google Cloud egress address(es). You can execute Splunk searches using the Splunk Cloud REST API search/jobs endpoint. Here's a simple Bash command-line example using curl, jq, and gcloud:

curl -s -u username:password https://<deployment-name>.splunkcloud.com:8089/services/search/jobs \
  -d search="| makeresults count=10" -d exec_mode=oneshot -d output_mode=json \
| jq -r '.results[] | tojson | @sh' \
| while IFS= read message; do gcloud pubsub topics publish mytopic --message=${message}; done

Replace <deployment-name> with your Splunk Cloud stack name and mytopic with your Google Cloud Pub/Sub topic name. This example assumes gcloud is already correctly configured. You can also use Splunk Cloud access tokens instead of username/password authentication. See https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud and https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTsearch#search.2Fjobs for more information. I don't work in Google Cloud day to day, so I recommend browsing the Google Cloud documentation for inspiration.