All Topics

I'm trying to export raw Linux audit logs to a file. For example:

splunk.exe search "sourcetype=linux:audit _time>xxxx _time<xxxxx" -output rawdata -maxout 0 > outputfile.txt

I'm trying to output a week's worth, but I'm not sure how many event records there are. I tried setting maxout to 500000 and monitoring with Task Manager; splunk grew to 20 GB of memory use at its peak. I tried setting maxout to 1000000 and it used up all of my free memory. The actual rawdata output is only a few hundred MB, so why is it using so much memory? More importantly, is there a workaround or fix so it doesn't use up so much memory? I could output in smaller time increments (daily, for example), but I don't know whether a single day might have generated a lot of events. I suppose I could go down to hourly.
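A possible workaround, sketched below: the CLI buffers results in memory, so bounding each run with inline earliest/latest keeps the working set small. The dates and output file names here are illustrative, not values from the original command:

splunk.exe search "sourcetype=linux:audit earliest=10/01/2022:00:00:00 latest=10/02/2022:00:00:00" -output rawdata -maxout 0 > day1.txt
splunk.exe search "sourcetype=linux:audit earliest=10/02/2022:00:00:00 latest=10/03/2022:00:00:00" -output rawdata -maxout 0 > day2.txt

Repeat per day (or per hour for unusually busy days) and concatenate the files afterwards.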
I initialize a lookup file using:

| makeresults | outputlookup status.csv

I then have this simple search:

| inputlookup status.csv | eval a=if(isnull(a),1,a+1) | outputlookup status.csv

This works fine when I run it in Search: the "a" value gets incremented each time the search is executed. But if I put the same thing in a dashboard, it results in an error and does not update the lookup table. In a classic dashboard, the panel with this search just says "Search was cancelled". In Dashboard Studio, a panel with this search as its data source reports this error: "Unexpected non-whitespace character after JSON at position 4". Removing the outputlookup command in either case makes the search work, and it shows an up-to-date value for "a" (given how many times the search was executed in Search). I have no clue what I'm doing wrong with what seems like such a simple thing - really hoping someone can help me!
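As an aside, a minimal sketch of an initialization that creates the counter field up front, so the increment logic never sees a missing column (the field name "a" is taken from the post):

| makeresults | eval a=0 | fields a | outputlookup status.csv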
I am having no luck listing users' memberships within a group using ldapsearch, and I am not an AD LDAP expert either. Let's say I have a domain called Foo and an OU (group) called Bar with 10 users. Each user has additional memberships in other groups. I am looking to list the membership attribute for each user. I am starting with

| ldapsearch domain=default search="(&(objectClass=user))"

...but I don't know what to add. Thank you
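A hedged sketch of one way to do this with SA-ldapsearch's basedn and attrs options; the DN components for Foo and Bar are assumptions, and if Bar is actually a security group rather than an OU, a memberOf filter in the search string would be the alternative:

| ldapsearch domain=default search="(objectClass=user)" basedn="OU=Bar,DC=Foo,DC=local" attrs="sAMAccountName,memberOf"
| table sAMAccountName memberOf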
Are we able to enable a cron-style schedule for the custom Query input? We have a use case where we want to run a SolarWinds query once per day at a certain time. I've tried updating the inputs.conf file, and it validates OK. The log shows it being scheduled and rescheduled for the next run time, but no data shows up.
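For reference, the general shape of a cron expression in an inputs.conf interval (the stanza name here is hypothetical, and whether a given modular input honors cron syntax depends on the add-on's implementation):

[solarwinds_query://daily_orion_query]
interval = 0 6 * * *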
Hello! I have recently downloaded Splunk on my Mac for experimenting/practicing searching and dashboarding. I picked a random CSV file that has planetary information. One of the fields in my .csv file has a mix of numbers without commas, and three numbers that have a comma, e.g. 59,800. I think this is causing those values to not show up in my visualization. Is there a way to remove said comma from the field value? I tried using the line below in the source code under the visualization, but it says it's an unknown option name.

<option name="useThousandSeparators">false</option>
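A minimal SPL sketch for stripping the separator and coercing the value to a number before charting (the field name "distance" is a placeholder; substitute the real one):

... | eval distance=tonumber(replace(distance, ",", ""))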
Hi. An off-the-shelf application logs to "/var/log/messages". Unfortunately, however, the delimiter is \x09 (tab). Is it possible to replace the delimiter with a space or comma on the Splunk Universal Forwarder side before forwarding? The version of Splunk on the receiving side is unknown. The Splunk Universal Forwarder version is 9.0.1, installed on RHEL 8.5. Thanks!
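For context, this kind of substitution is usually done with SEDCMD in props.conf, which runs at parse time on an indexer or heavy forwarder rather than on a Universal Forwarder. A sketch, assuming the events arrive as sourcetype syslog:

[syslog]
SEDCMD-replace_tabs = s/\t/ /g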
The UF service failed to start after a reboot on a Windows server. I've addressed that issue, but there are logs that were generated during the downtime that are not being forwarded. Is there any way to force those entries up?
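One hedged way to check whether the tailing processor has caught the monitored files back up after the restart (this mirrors the splunk _internal call style; the endpoint reports per-file read positions, and the exact output format may vary by version):

splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus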
Hello, Is there a way to have a playbook automatically trigger when a file is added to an S3 bucket in our AWS account? My initial thought is to have an AWS Lambda trigger when a file is added to the S3 bucket, then have that Lambda publish the file event information to a Kafka topic, then have our Splunk SOAR hooked up to poll that Kafka topic via the Kafka SOAR App, then have the playbook set up to trigger when something comes in on that poll (if that's even possible). Is this the best way to go about this? Thank you!
When conducting searches, we have observed that SPL searches were not honoring the "earliest" time range specified in the SPL itself; they only worked when we chose one of the configured time-picker presets.
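For reference, a minimal sketch of the inline form, which should take precedence over the time picker when placed in the base search (the index name is a placeholder):

index=main earliest=-7d@d latest=now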
How can you get Splunk Universal Forwarders onto every host in a Windows domain in an isolated environment (no internet access)?
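A common pattern is to push the MSI silently from an internal file share via Group Policy or another software-distribution tool; a sketch of the install command, where the MSI file name and deployment server host are assumptions:

msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.corp.local:8089" /quiet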
hello
Is it possible to use a base search in a subsearch? I would like to call the base search

<search id="signal1">
  <query>index=test</query>
  <earliest>$date.earliest$</earliest>
  <latest>$date.latest$</latest>
</search>

in my subsearch, something like this?

<search base="signal1">
  <query>index=test | stats count as "Nombre total d'erreurs" | appendcols [ search base="signal1" <query>index=test | stats count as "Nombre total d'erreurs"</query> ]</query>
</search>

thanks
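For reference, the shape of a valid post-process search that references the base: the base attribute only works on a panel's own search element, not inside an appendcols subsearch, and a post-process query should not repeat the index clause (the field name below is illustrative):

<search base="signal1">
  <query>stats count as "Nombre total d'erreurs"</query>
</search>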
Hi Team, is there any way to add a time token with timewrap on the dashboard? I have a dashboard ready that displays this week's data compared against last week's, using timewrap with 7d. But I would like to add a token to replace the 7d value as per the user's choice. Search query:

index=ABC sourcetype="xyz" data earliest=-14d@d latest=@s
| timechart span=15m partial=false count by data
| timewrap 7d series=short
| table _time, s0, s1
| rename s0 as this_week, s1 as last_week
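A sketch of the token substitution (assuming a dashboard input, e.g. a dropdown, that sets a token named $wrap_span$ to values like 7d or 14d; the token name is hypothetical):

index=ABC sourcetype="xyz" data earliest=-14d@d latest=@s
| timechart span=15m partial=false count by data
| timewrap $wrap_span$ series=short
| table _time, s0, s1
| rename s0 as this_week, s1 as last_week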
Not sure if I am putting this in the correct area; my apologies ahead of time. I wanted to know if it would be possible to have Splunk dynamically populate a table based on incoming log messages. The log messages for new alerts and cleared alerts are essentially the same, save for one "field" that shows either "NEW" or "CLEARED".

##Example of New Alert Log Message##

2022-10-06 05:58:31 AlarmNotification = NEW AlarmID = STRING: "123456789" AlarmType = INTEGER: 1 ObjectInstance = STRING: "Router1" EventTime = STRING: "2022-10-6,5:58:31.7,-7:0" SpecificProblem = STRING: "LinkDown" Severity = INTEGER: 2

##Example of Clear Alert Log Message##

2022-10-06 05:58:35 AlarmNotification = CLEARED AlarmID = STRING: "123456789" AlarmType = INTEGER: 1 ObjectInstance = STRING: "Router1" EventTime = STRING: "2022-10-6,5:58:35.5,-7:0" SpecificProblem = STRING: "LinkDown" Severity = INTEGER: 2

My idea was: any time a new alert comes in, a table with the various fields is generated; I can already do that today. What I am not sure about is whether a subsequent "clear" log message, where everything matches (with the exception of the AlarmNotification and EventTime), could dynamically REMOVE that table row.

So the general idea is to show the alerts when they come in, but a cleared-alert message that comes in with a later date and time would "delete" that row from the table.

Any and all suggestions are welcomed. Thank you in advance.
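A hedged sketch of one standard pattern: fold NEW and CLEARED events together and keep only alarms whose latest state is still NEW (field names come from the sample messages; the index is a placeholder):

index=alarms
| stats latest(AlarmNotification) as state latest(EventTime) as last_event by AlarmID, ObjectInstance, SpecificProblem
| where state="NEW"
| table AlarmID, ObjectInstance, SpecificProblem, last_event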
Hello All, I have just installed RHEL 9.0 as a POC and would like to install Enterprise 9.0.1. The compatibility chart is kernel-based rather than OS-version-based. Anyway, the kernel version compatibility shows 5.4.x or greater, and the kernel version for RHEL 9.0 is 5.14.x. Is this a typo on your side?
Hello. Multiple PCs can log in to Splunk Web with the same ID. When I connect from multiple PCs with the same ID, I want to keep only the most recent session and disconnect the sessions from the PCs I logged in from earlier. On a single instance, the session can be cut off as follows:

./splunk _internal call "/services/authentication/httpauth-tokens/[SESSION_ID]" -method DELETE

The SESSION_ID above comes from the "other" field of the Splunk UI access log. However, this method does not work in a search head cluster. Can I have a search head cluster keep only the last session I accessed when connecting from multiple PCs with the same ID? I look forward to hearing from you.
I'm really bad when it comes to join searches, though I've been doing this for years. I'm able to find the list of orphaned searches using:

| rest /servicesNS/-/-/admin/directory count=0 splunk_server=<splunkserver>
| rename eai:* as *, acl.* as *
| eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"), updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
| sort type
| eval sAMAccountName=owner
| stats count by title orphaned sAMAccountName sharing type owner updated app disabled
| search orphaned=1

and we have a summary index containing our LDAP users and the managers for those users. The following search returns users and their managers:

index=metrics_summary source="LDAP*" source IN("LDAP GROUP USER DIVISION Summary Index Search" "LDAP_GROUP_USER_DIVISION_Summary_Index_Search" lookup_ldap_group_user_division) sAMAccountName=e* OR sAMAccountName=v*
| table sAMAccountName displayName mail department division manager

But I haven't been able to join the two searches together to give me the manager name of the user with the orphaned search. I've tried variations of the following:

| rest /servicesNS/-/-/admin/directory count=0 splunk_server=<splunkserver>
| rename eai:* as *, acl.* as *
| eval updated=strptime(updated,"%Y-%m-%dT%H:%M:%S%Z"), updated=if(isnull(updated),"Never",strftime(updated,"%d %b %Y"))
| sort type
| eval sAMAccountName=owner
| stats count by title orphaned sAMAccountName sharing type owner updated app disabled
| search orphaned=1
| join sAMAccountName type=outer max=0
    [ search index=metrics_summary source="LDAP*" source IN("LDAP GROUP USER DIVISION Summary Index Search" "LDAP_GROUP_USER_DIVISION_Summary_Index_Search" lookup_ldap_group_user_division)
    | stats latest(_time) AS latest values(displayName) values(mail) values(distinguishedName) values(department) values(division) latest(userAccountControl) values(manager) by sAMAccountName
    | rename values(*) AS *, latest(*) AS * ]

but this only comes back with results from the rest call. I know I get results using the summary index search. How do I merge these?

Thanks
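One hedged thing to try, sketched below: join matches on exact field values, so normalizing the case of sAMAccountName on both sides just before the join rules out the most common silent mismatch (everything else stays as in the original searches; the subsearch stats line is abbreviated for clarity):

... | eval sAMAccountName=lower(owner)
| join type=outer max=0 sAMAccountName
    [ search index=metrics_summary source="LDAP*"
    | eval sAMAccountName=lower(sAMAccountName)
    | stats values(displayName) AS displayName values(manager) AS manager by sAMAccountName ]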
Hello, I have a log line in Splunk like the one below:

Address: XXX HttpMethod: POST Headers: {Ama-Internal-REST-Service=hotel/booking, , Ama-Internal-Protocol=HTTP, Message-Type=RPWREQ} Payload: {"channel":"noChannel","conversationId":"12345","version":"1.0","agent":"noAgent","date":"2023-01-01","events":[{"action":"Update","objectAfter":{"chainCode":"BLR","brandCode":"ES","propertyCode":"HYATT"},"type":"Property"}]}

I need to extract the payload after "Payload:" and then produce a stats table where the columns are all the fields in the payload. For example:

TABLE OUTPUT: channel conversationId version date chainCode propertyCode type
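A minimal sketch using rex to isolate the JSON and spath to expand it (the {} notation addresses the events array; field names follow the sample payload):

... | rex field=_raw "Payload:\s*(?<payload>\{.*\})"
| spath input=payload
| rename "events{}.objectAfter.chainCode" AS chainCode, "events{}.objectAfter.propertyCode" AS propertyCode, "events{}.type" AS type
| table channel conversationId version date chainCode propertyCode type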
Hi Experts, need your quick suggestion/support. I'm trying to set up an integration between Splunk and Salesforce using Splunk_TA_salesforce, and I'm getting the SSL error below.

2022-10-06 09:24:35,525 ERROR pid=22321 tid=MainThread file=task.py:_send_request:475 | [stanza_name=monitoring__c] Error occurred in request url=https://company-business--preprod01.sandbox.salesforce.com/services/data/v54.0/query?q=SELECT%20Crea... method=GET reason=HTTP Error HTTPSConnectionPool(host='company-business--preprod01.sandbox.my.salesforce.com', port=443): Max retries exceeded with url: /services/data/v54.0/query?q (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))

I have placed a certificate PEM file in $SPLUNK_HOME/etc/auth/certs/ and defined this path in splunk_ta_salesforce_settings.conf as ca_certs_path = /opt/splunk/etc/auth/certs/cert_chain.pem, but we still get the SSL error. How can SSL certificate verification be disabled in the 'Splunk_TA_salesforce' add-on? Please suggest. Thanks
Hi all. It might sound weird but I need assistance converting Azure Sentinel queries to SPL. The main goal is to use Microsoft's new Exchange vulnerability detection methods. So if you got one ready to use, please share. These are the queries I wish to have in SPL:

https://github.com/Azure/Azure-Sentinel/blob/08a8d2b9c5c9083e341be447773a34b56b205dee/Detections/W3CIISLog/ProxyShellPwn2Own.yaml
https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SecurityEvent/ExchangeOABVirtualDirectoryAttributeContainingPotentialWebshell.yaml
https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/W3CIISLog/WebShellActivity.yaml
https://github.com/Azure/Azure-Sentinel/blob/master/Detections/W3CIISLog/MaliciousAlertLinkedWebRequests.yaml
https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/Microsoft%20365%20Defender/Execution/exchange-iis-worker-dropping-webshell.yaml
https://github.com/Azure/Azure-Sentinel/blob/master/Hunting%20Queries/W3CIISLog/PotentialWebshell.yaml

source: https://www.microsoft.com/security/blog/2022/09/30/analyzing-attacks-using-the-exchange-vulnerabilities-cve-2022-41040-and-cve-2022-41082/

Thank you!!
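For a flavor of the mapping, a hedged sketch of the first query's core pattern (ProxyShell-style autodiscover/powershell requests in IIS logs) translated to SPL; the index and the W3C field names are assumptions that depend on how the IIS logs are ingested:

index=iis cs_uri_stem="*autodiscover.json*" cs_uri_query="*powershell*"
| stats count min(_time) as first_seen max(_time) as last_seen by c_ip, cs_uri_stem, cs_uri_query, sc_status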
Tell me, what should I do in my case? I need to go from a field value like 1.SAPS-SIS.TO.LSP.SEND or 12.SAPS-SIS.TO.LSP.RECEIVE to a field like "routepointIDnum": "1" or "routepointIDnum": "12". I tried this, and it almost works:

index="main" sourcetype="testsystem-script333"
| eval routepointID_num=substr(routepointID,1,2)
| table routepointID_num

Almost, because I get "routepointIDnum": "1." or "routepointIDnum": "12", and I need "routepointIDnum": "1" or "routepointIDnum": "12".
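A minimal sketch that captures only the digits before the first dot, instead of taking a fixed-width substring (field names as in the post):

index="main" sourcetype="testsystem-script333"
| rex field=routepointID "^(?<routepointIDnum>\d+)\."
| table routepointIDnum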