All Topics

Hello, there must be something `rex`-specific with my query below, since it is not extracting the fields, while the regex works as expected when I test it on regex101 (see https://regex101.com/r/g0TMS4/1):

eventtype="my_event_type" | rex field=responseElements.assumedRoleUser.arn /arn:aws:sts::(?<accountId>\d{12}):assumed_role\/(?<assumedRoled>.*)\/vault-oidc-(?<userId>\w+)-*./ | fields accountId, assumedRole, userId

Sample data that fails to match:

arn:aws:sts::984086324016:assumed-role/foo-admin-app/vault-oidc-foo-admin-app-1687793763-Qen4JHeRXYlB8Eoplkjs

Thanks, Alex.
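Editor's note, worth double-checking: the pattern matches assumed_role with an underscore, while the sample ARN contains assumed-role with a hyphen, and the capture group is named assumedRoled while the `fields` command asks for assumedRole. A quick Python sanity check of a corrected pattern against the sample data (field names kept from the post):

```python
import re

# The original pattern used "assumed_role" (underscore); the sample ARN
# contains "assumed-role" (hyphen). This version also renames the capture
# group from "assumedRoled" to "assumedRole" to match the fields command.
pattern = re.compile(
    r"arn:aws:sts::(?P<accountId>\d{12}):assumed-role/"
    r"(?P<assumedRole>.*)/vault-oidc-(?P<userId>\w+)-*."
)

arn = ("arn:aws:sts::984086324016:assumed-role/foo-admin-app/"
       "vault-oidc-foo-admin-app-1687793763-Qen4JHeRXYlB8Eoplkjs")
m = pattern.search(arn)
print(m.group("accountId"))    # 984086324016
print(m.group("assumedRole"))  # foo-admin-app
```

The same two fixes should carry over to the `rex` version of the pattern unchanged, since `rex` uses PCRE-compatible syntax.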
Hello, I have the following search:

index=wineventlog EventCode=4728 OR EventCode=4731 OR EventCode=4729 OR EventCode=4732 OR EventCode=4756 OR EventCode=4756 NOT src_user=*$ | rename src_user as admin, name as action | table admin, Group_Name, user_name

This spits out output like this:

admin   Group_Name  user_name
adminx  GroupA      UserA
adminx  GroupB      UserA
adminx  GroupC      UserA
adminy  GroupD      UserB
adminy  GroupE      UserB
adminy  GroupF      UserC
adminy  GroupF      UserD

I'm trying to combine the rows into a single message that looks like this:

admin   Group_Name            user_name
adminx  GroupA,GroupB,GroupC  UserA
adminy  GroupD,GroupE         UserB
adminy  GroupF                UserC,UserD

What would be the best way to achieve that?
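Editor's note, one possible approach (a sketch, using the field names shown above): the desired output first collapses groups per admin/user pair, then collapses users that ended up with the same group string, so two `stats` passes reproduce it:

```spl
... your base search ...
| stats values(Group_Name) as Group_Name by admin user_name
| eval Group_Name=mvjoin(Group_Name, ",")
| stats values(user_name) as user_name by admin Group_Name
| eval user_name=mvjoin(user_name, ",")
```

The first pass yields adminx/UserA with GroupA,GroupB,GroupC; the second pass then merges UserC and UserD, who share the identical GroupF string.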
Hi, I would like to join the results of two searches, sum them, and output the total. The searches:

index=test_index sourcetype="test_source" className=export | table message.totalExportedProfileCounter
index=test_index sourcetype="test_source" className=export | table message.exportedRecords

From the two searches above I am looking to add message.totalExportedProfileCounter and message.exportedRecords. For a given call, only one of the two fields shows up. I am looking for message.totalExportedProfileCounter + message.exportedRecords.

Thanks in advance!
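Editor's note: since only one of the two fields is present on any given event, a single search with a coalesce-style addition may be enough, avoiding `join` entirely (a sketch; note that field names containing dots need single quotes inside `eval`):

```spl
index=test_index sourcetype="test_source" className=export
| eval exported = coalesce('message.totalExportedProfileCounter', 0)
                + coalesce('message.exportedRecords', 0)
| stats sum(exported) as totalExported
```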
In my search I have no lookup command. Does anyone know why I am getting this error?
Hello, I am collecting data via the AWS add-on, and I have found that my timestamp recognition isn't working properly. I have a single AWS input using the [aws:s3:csv] sourcetype. This then uses transforms to update the sourcetype based on the name of the file the data comes from. Config snips:

props.conf

[aws:s3:csv]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
TRUNCATE = 20000
TRANSFORMS-awss3 = sourcetypechange:awss3-object_rolemap_audit,sourcetypechange:awss3-authz-audit-logs

[awss3:object_rolemap_audit]
TIME_FORMAT = %d %b %Y %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

[awss3:authz_audit]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q
#TZ=GMT
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

transforms.conf

[sourcetypechange:awss3-object_rolemap_audit]
SOURCE_KEY = MetaData:Source
REGEX = .*?object_rolemap_audit.csv
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::awss3:object_rolemap_audit

[sourcetypechange:awss3-authz-audit-logs]
SOURCE_KEY = MetaData:Source
REGEX = .*?authz-audit.csv
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::awss3:authz_audit

From what I can see, the events are being timestamped with the index time, even though I set timestamp recognition for each renamed sourcetype. I believe timestamping is happening on the initial pass into Splunk, before the transforms are applied.

How can I set timestamping via the initial sourcetype if there are multiple timestamp formats depending on the file, since it's not honoring the timestamp recognition settings post-transforms? Thanks for the help.
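Editor's note: that diagnosis matches the pipeline order — timestamps are extracted during parsing using the props of the sourcetype the event arrives with ([aws:s3:csv]), so an index-time sourcetype rewrite happens too late to change timestamp recognition (and INDEXED_EXTRACTIONS is applied even earlier, on the forwarder). One workaround is to key the timestamp settings on the source path instead of the rewritten sourcetype, since source:: stanzas in props.conf do apply at parse time (a sketch; path patterns to be adapted):

```ini
# props.conf: source-based stanzas are evaluated during parsing,
# before the sourcetype-rewrite transforms take effect.
[source::...object_rolemap_audit.csv]
TIME_FORMAT = %d %b %Y %H:%M:%S

[source::...authz-audit.csv]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q
```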
I tried changing the permission to the "All apps" option, but I don't see that option. Is there another way to make my macro available to all apps?
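Editor's note: if the "All apps" option is missing in the UI (it requires write access to the system context), the sharing can also be set on disk in the owning app's metadata (a sketch; the app and macro names below are placeholders, and Splunk needs a restart or a reload of the app after editing):

```ini
# $SPLUNK_HOME/etc/apps/<app_name>/metadata/local.meta
[macros/my_macro_name]
export = system
```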
Here are three lines of the file to illustrate what I'm going for:

Line from file                                          Desired field
URI : https://URL.net/token                             token
URI : https://URL.net/rest/v1/check                     rest/v1/check
URI : https://URL.net/service_name/3.0.0/accounts/bah   service_name

I have successfully extracted the 3rd example using this:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)\/\d+\."

That does not match the other two though, so no field is extracted. Is there a way to say: if it doesn't match that regex, then capture until the end of the line? I've tried this, but then the 3rd example also captures everything until the end of the line:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)(\/\d+\.|\n)"
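Editor's note, one way to get this behavior: make the capture lazy and stop it at either a "/<digits>." version segment or the end of the line, using a lookahead so nothing is consumed. In rex form that might be `\.(net|com)\/(?<URI_ABR>.+?)(?=\/\d+\.|$)` (hedged: `$` behavior depends on whether _raw is multi-line). A quick Python check of the core pattern against the three sample lines:

```python
import re

# Lazy capture that stops at a "/<digits>." version segment if one
# exists, otherwise at end of string (the lookahead consumes nothing).
pat = re.compile(r"\.(?:net|com)/(?P<URI_ABR>.+?)(?=/\d+\.|$)")

samples = [
    "URI : https://URL.net/token",
    "URI : https://URL.net/rest/v1/check",
    "URI : https://URL.net/service_name/3.0.0/accounts/bah",
]
for line in samples:
    print(pat.search(line).group("URI_ABR"))
# token
# rest/v1/check
# service_name
```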
How to get a list of Splunk Cloud index restores, and the time each restore took to complete?
Hello, is there a way to add a control to a dashboard (in Dashboard Studio), for example a dropdown, to enable/disable a certain alert? Thanks!
I have the actual list of indexes in a lookup file. I ran the query below to find the list of indexes with the latest ingestion time:

index=index* | stats latest(_time) as latestTime by index | eval latestTime=strftime(latestTime,"%x %X")

How do I find whether there is any index which is listed in the lookup but not returned by this query? Can you please help?
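Editor's note, one common pattern (a sketch; the lookup name my_indexes.csv and its index column are assumptions): append the lookup rows to the stats output, then keep the indexes that never received a latestTime value:

```spl
index=index*
| stats latest(_time) as latestTime by index
| inputlookup append=true my_indexes.csv
| stats values(latestTime) as latestTime by index
| where isnull(latestTime)
```

Rows surviving the final `where` are indexes present in the lookup but absent from the search results.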
Hi, the documentation I found details the upgrade of a two-site cluster in "site-by-site" fashion, which is solid as a rock. We normally go that way, yet without taking down one site's peers all at once, but by updating them one by one. There is also a description of a rolling upgrade, in which I did not find any mention of multi-site clusters. I tried a combination of both by doing a rolling upgrade of one site and then the other, which at the end of the day did not speed things up very much; I still had to wait in the middle for the cluster to recover and become green again. Did I miss a description of the rolling upgrade of a multi-site indexer cluster? What would be the benefit? And what is the difference anyway between going into maintenance mode and a rolling upgrade? Thanks in advance, Volkmar
How to onboard CloudWatch data to Splunk using HEC?
I am unable to create a data collector on my Node.js application. I came across this doc: "For the Node.js agent, you can create a method data collector only using the addSnapshotData() Node.js API, not the Controller UI as described here. See Node.js Agent API Reference." I have two questions:

How do I determine the value and key to use?
Where do I add addSnapshotData()?
How to extract the fields which come under message and failedRecords?
Hi, I've searched the forums and found one thread about getting Intune data into Splunk, which set me on a path, hopefully, to getting the data in. I'm working through the guidance here: Splunk Add-on for Microsoft Cloud Services - Splunk Documentation.

I have requested and received an Event Hub from the team that looks after our Azure infrastructure. I've created the Azure app registration, set the API permissions and the access control on the Event Hub, and I can see it is being successfully signed in to, having configured the Splunk add-on side as well. I'm not seeing any data come into the index, though.

The one thing I'm unclear on, and haven't been able to work out the definitive answer to, is whether I need an Azure storage account in order to store the data. My reading of the Event Hub configuration options suggested that it is capable of some form of retention to allow streaming elsewhere (e.g. setting the retention time), but perhaps I am misinterpreting that.

Has anyone successfully got this working, and if so, are you using a storage account with this?
I need a trial license for End User Monitoring.
Dear all, I have a list of latitude and longitude pairs from my observed events and am trying to get the corresponding street/city for each pair. I found this website, https://nominatim.org/release-docs/develop/api/Reverse/, and the URL it provides to map latitude and longitude to a street/building:

https://nominatim.openstreetmap.org/reverse?format=xml&lat=<my_lat>&lon=<my_lon>&zoom=18

For example, https://nominatim.openstreetmap.org/reverse?format=xml&lat=39.982302&lon=116.484438&zoom=18 outputs: 360building, xx street, xx road, xx village, xx district, Beijing, 100015, China.

However, here is my raw data; with more than 1000 records it's impossible to input them into the URL manually:

log id  Lat        Lon
1       39.982302  116.484438
2       30.608684  114.418229

Does anyone know whether there is a way to treat the Lat and Lon as input parameters in the URL by code? Thank you.
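Editor's note: outside Splunk, a short script can loop over an export of the lat/lon pairs and call the same endpoint (a sketch; the CSV file name and column names are assumptions, and Nominatim's usage policy asks for an identifying User-Agent and at most one request per second):

```python
import csv
import time
import urllib.parse
import urllib.request

def reverse_url(lat, lon, fmt="xml", zoom=18):
    """Build a Nominatim reverse-geocoding URL for one lat/lon pair."""
    query = urllib.parse.urlencode(
        {"format": fmt, "lat": lat, "lon": lon, "zoom": zoom})
    return "https://nominatim.openstreetmap.org/reverse?" + query

def lookup(lat, lon):
    """Fetch the reverse-geocoding response for one pair."""
    req = urllib.request.Request(
        reverse_url(lat, lon),
        headers={"User-Agent": "my-event-enrichment/1.0"})  # required by policy
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Example: iterate over an exported CSV (columns: id, Lat, Lon),
# printing one address per row; sleep to respect the rate limit.
# with open("events.csv") as f:
#     for row in csv.DictReader(f):
#         print(row["id"], lookup(row["Lat"], row["Lon"]))
#         time.sleep(1)
```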
Hi all, I'm attempting to exclude specific undesired data from the security logs. Is there a way to minimize the number of items on the exclusion list, considering that we can only add up to 10 items to the blacklist due to limitations? Is it possible to consolidate the blacklist by, for example, joining blacklisted items with a "|" (pipe), thereby reducing the total number of entries while still achieving the desired exclusion?

blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))"
blacklist2 = EventCode="4648|5145|4799|5447|4634|5156|4663|4656|5152|5157|4658|4673|4661|4690|4932|4933|5158|4957|5136|4674|4660|4670|5058|5061|4985|4965"
blacklist3 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunkd.exe)|.+(?:SplunkUniversalForwarder\\bin\\btool.exe)"
blacklist4 = EventCode="4688" Message="(?:New Process Name:).+(?:SplunkUniversalForwarder\\bin\\splunk-winprintmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-powershell.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-regmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-netmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-admon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-MonitorNoHandle.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-winevtlog.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-perfmon.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk-wmi.exe)|.+(?:SplunkUniversalForwarder\\bin\\splunk\-winhostinfo\.exe)"
blacklist5 = EventCode="4688" Message="(?:Process Command Line:).+(?:system32\\SearchFilterHost.exe)|.+(?:find/i)|.+(?:WINDOWS\\system32\\conhost.exe)"
renderXml=true
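Editor's note: since each blacklist entry's Message value is itself a regular expression, alternation does let several entries be folded into one. For example, blacklist3 and blacklist4 could be collapsed by alternating the executable names inside a single group (a sketch, to be tested against real events before deploying):

```ini
blacklist3 = EventCode="4688" Message="New Process Name:.+SplunkUniversalForwarder\\bin\\(?:splunk|splunkd|btool|splunk-winprintmon|splunk-powershell|splunk-regmon|splunk-netmon|splunk-admon|splunk-MonitorNoHandle|splunk-winevtlog|splunk-perfmon|splunk-wmi|splunk-winhostinfo)\.exe"
```

This keeps the same set of excluded processes while freeing one of the ten blacklist slots.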
I am trying to configure SAML SSO for my Splunk heavy forwarders. When I upload and save the SP metadata file and the remaining details, it shows the message below at the top, but the configuration does not get saved on the backend and is not working:

"SAML has already been configured, Cannot add a new SAML configuration.saml"

Your help is greatly appreciated.
Hi all, just wondering if anyone has a search that shows which user deleted another user in Linux? Typically in the Linux syslog messages, when we check for userdel messages, they only show the name of the user account that was deleted; there isn't any mention of which user performed the action. Whereas in Windows events, we see both src and target user for deletion Event IDs. How do we get this info? I know one can manually log in to the host and check ~/.bash_history, but how do you accomplish this from Splunk itself?
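Editor's note: the syslog userdel line genuinely lacks the actor, so the usual route is to ingest a data source that records it — auditd SYSCALL records carry auid (the audit/login identity of the user who ran the command), and sudo logs the invoking account. With /var/log/audit/audit.log onboarded, something like this sketch (the index and sourcetype names are assumptions for your environment) surfaces who ran userdel:

```spl
index=os sourcetype=linux_audit type=SYSCALL comm=userdel
| table _time host auid uid exe
```

The auid field survives su/sudo transitions, which is what makes it more reliable than the uid on the syslog line.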