All Posts

  Hi, I'm trying to ingest the GuardDuty logs using the Splunk Add-on for AWS app. The input method is Generic S3, and logs from CloudTrail or WAF come in fine, but the GuardDuty logs are not coming in. Of course, the data is in the S3 bucket. I'm attaching the guard duty.log.   Thank you.
Hello everyone! How can we solve the problem of searching for secrets across all (or some) Splunk indexes without putting a heavy load on Splunk? What approach would implement this? It is obvious that the list of indexes needs to be limited. What else?
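One rough sketch of an approach (index names, the time window, and the secret patterns below are placeholders to adapt): restrict the search to a short list of indexes and a bounded time range, match only a few high-signal patterns, and cap the results so a noisy index can't blow up the job.

```
index IN (app_logs, web_logs) earliest=-24h
| regex _raw="(?i)(password|passwd|secret|api[_-]?key|token)\s*[=:]\s*\S+"
| head 1000
```

Running this as a scheduled search per index during off-peak windows, rather than one ad-hoc search over everything, also spreads the load.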
@loganramirez  To schedule a PDF email to a mail server that does not require SMTP authentication, you must have the list_settings capability and use the sendemail command. If you want users who do not have the admin, splunk-system-role, or can_delete roles to be able to send email notifications from their searches, you must grant them the list_settings capability. By default, only the admin, splunk-system-role, and can_delete roles have access to list_settings.    
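As a hedged illustration of the sendemail command mentioned above (the recipient address and mail host are placeholders, and the SMTP server here is assumed to accept unauthenticated relay), a user with the list_settings capability could send results like this:

```
index=_internal | head 10
| sendemail to="user@example.com" server="mailhost.example.com" subject="Scheduled results" sendpdf=true
```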
I found the response handler below. Will this work, or does it require any modification, as per the sample in my original request?

class ArrayHandler:
    def __init__(self, **args):
        pass

    def __call__(self, response_object, raw_response_output, response_type, req_args, endpoint, oauth2=None):
        if response_type == "json":
            raw_json = json.loads(raw_response_output)
            column_list = []
            for column in raw_json['columns']:
                column_list.append(column['name'])
            for row in raw_json['rows']:
                i = 0
                new_event = {}
                for row_item in row:
                    new_event[column_list[i]] = row_item
                    i = i + 1
                print_xml_stream(json.dumps(new_event))
        else:
            print_xml_stream(raw_response_output)
Nope, I think I ended up using sed in props to remove the offending " ".
Hello @ccampbell, The reason you are not able to upgrade the app to the latest version is that the app is not listed as compatible with the Splunk Cloud platform on Splunkbase. If you wish to have the updated package available for upgrade on the Splunk Cloud platform, you'll need to have the developer of the app update the platform compatibility details. Please refer to the following screenshot displaying platform compatibility for the app.   I assume that the AppInspect vetting failed for the app package, and hence platform compatibility for Splunk Cloud is missing. As a workaround, you can download the app package from Splunkbase, modify the app_id (in app.conf, the folder name, and anywhere else within the package it is used), repackage the app, and upload it as a private app. This approach is not recommended, since it doesn't allow you to track and stay current with future updates. Additionally, while uploading the private app, if AppInspect fails, you'll need to fix the errors, repackage the app, and re-vet it until it passes. The best approach would be to engage the developer and make the app compatible with the Splunk Cloud platform.   Thanks, Tejas   --- If the above solution helps, an upvote is appreciated!
Hello @shai, In this scenario, you'll need to combine your certs with the Splunk Cloud certificates. Just append the CA file to include the self-signed certificate and the Splunk Cloud rootCA, and use that same file for communication. This chain will let you communicate with both the Splunk Enterprise on-prem and Splunk Cloud environments.   Thanks, Tejas.   --- If the above solution helps, an upvote is appreciated!
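As a rough sketch of the combining step (the file names are placeholders, and the printf lines below only create stand-in PEM contents for illustration; in practice you would use your real certificate files), building the chain is just concatenating the PEM files:

```shell
# Stand-in PEM contents for illustration only; use your real certificate files.
printf -- '-----SELF-SIGNED-CA-----\n' > my_selfsigned_ca.pem
printf -- '-----SPLUNKCLOUD-ROOT-CA-----\n' > splunkcloud_rootca.pem

# Append both CAs into a single chain file and point your TLS config at it.
cat my_selfsigned_ca.pem splunkcloud_rootca.pem > combined_ca.pem
```

The combined file would then be referenced from your TLS configuration (for example, the sslRootCAPath setting in server.conf, assuming that is the setting your deployment uses).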
Hi @TheEggi98 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Alright, thank you. I will use sourcetype and index overriding, and then make the data of the newly added input available to our QS cluster to build dashboards.
We have the below data in JSON format. I need help with a custom JSON response handler so Splunk can break every event out separately. Each event starts with the record_id:

{
  "eventData": [
    {
      "record_id": "19643",
      "eventID": "1179923",
      "loginID": "PLI",
      "userDN": "cn=564SD21FS8DF32A1D87FAD1F,cn=Users,dc=us,dc=oracle,dc=com",
      "type": "CredentialValidation",
      "ipAddress": "w.w.w.w",
      "status": "success",
      "accessTime": "2024-08-29T06:23:03.487Z",
      "oooppd": "5648sd1csd-952f-d630a41c87ed-000a3e2d",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    },
    {
      "record_id": "19644",
      "eventID": "1179924",
      "loginID": "OKP",
      "userDN": "cn=54S6DF45S212XCV6S8DF7,cn=Users,dc=us,dc=CVGH,dc=com",
      "type": "Logout",
      "ipAddress": "X.X.X.X",
      "status": "success",
      "accessTime": "2024-08-29T06:24:05.040Z",
      "oooppd": "54678S3D2FS962SDFV3246S8DF",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    }
  ]
}
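A minimal sketch of the splitting logic (the response-handler wrapper, its signature, and the print_xml_stream call are assumptions based on the REST API modular input convention discussed in this thread): the core idea is to emit each entry of the top-level eventData array as its own JSON event.

```python
import json


def split_event_data(raw_response_output):
    """Return one compact JSON string per record in the "eventData" array,
    so each record can be streamed to Splunk as a separate event."""
    raw_json = json.loads(raw_response_output)
    return [json.dumps(record) for record in raw_json.get("eventData", [])]

# Inside a custom response handler's __call__, each returned string would
# then be passed to print_xml_stream(...) when response_type == "json";
# that wrapper is part of the modular input framework, not shown here.
```

With the sample payload above, this yields two strings, one starting at record_id 19643 and one at 19644, each of which becomes a separate Splunk event.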
Hi @TheEggi98 , if the file to read is always the same in both inputs, Splunk doesn't read a file twice, and the solution is the second one I described (overriding). If instead you have different files in the same path to read in the two inputs, you can specify the different file names to read in the input stanzas, even using the same path. Ciao. Giuseppe
Hi @gcusello , thanks for the fast response. If I'm not wrong, I could theoretically bypass the precedence by doing this (at least btool doesn't complain), but I will not do that:

[monitor://<path to logfile>.log]
...

[monitor://<path to same logfile>.lo*]
...

When overriding sourcetype and index on the indexer, am I able to route the data of the second sourcetype to our QS cluster to build dashboards?
Hi @TheEggi98 , you cannot read the same files in two input stanzas; only one (by precedence rules) will be used. If, in the same path, you have to read different files for each input, you can specify the correct file to read in each stanza. If instead the data are in the same file, the only solution is to read it with one input stanza and then override the index and, if needed, the sourcetype values on the indexers or (if present) on the heavy forwarders, following the instructions at https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/Advancedsourcetypeoverrides for sourcetype and https://community.splunk.com/t5/Getting-Data-In/Route-data-to-index-based-on-host/td-p/10887 for index. Ciao. Giuseppe
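As a hedged sketch of the override described above (the stanza names, index, sourcetype, and regex are placeholders to adapt to your data), the indexer- or heavy-forwarder-side configuration would look roughly like:

```
# props.conf
[<dataspecific sourcetype 1>]
TRANSFORMS-override = override_index, override_sourcetype

# transforms.conf
[override_index]
REGEX = <pattern matching the second process's events>
DEST_KEY = _MetaData:Index
FORMAT = <dataspecific index 2>

[override_sourcetype]
REGEX = <pattern matching the second process's events>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::<dataspecific sourcetype 2>
```

Events matching the regex get the new index and sourcetype at parsing time; everything else keeps the values set in inputs.conf.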
Try something like this

| eval root=mvjoin(mvindex(split(policy,"_"),0,1),"_")
| eval version=mvindex(split(policy,"_"),2)
| timechart span=48h values(version) as version by root
| eval date=if(_time < relative_time(now(),"-2d"), "Last 48 Hours", "Today")
| fields - _time _span
| transpose 0 header_field=date column_name=policy
| eval "New version"=if('Last 48 Hours' == Today, null(), Today)
Hi @btheneghan , if you already extracted the field manual_entry and the format is always the one you described in your samples, you could use this regex: | rex field=manual_entry "^\#\d+\s(?<manual_entry>.*)" If you didn't extract the field manual_entry and the format is always the one you described, you could use: | rex "^\#\d+\s(?<manual_entry>.*)"  Ciao. Giuseppe
Hi there, I have a file monitoring stanza on a universal forwarder where I filter using transforms.conf to only get the log entries I need, because the server writes log entries of multiple business processes into the same logfile. Now I need entries of another process, with a different ACL, in a different index from that logfile, but in our QS cluster, while the first data input still ingests into our PROD cluster.

So I have my inputs.conf:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 1>
sourcetype = <dataspecific sourcetype 1>

a props.conf:

[<dataspecific sourcetype 1>]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1500
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT = [%y/%m/%d %H:%M:%S]
TRANSFORMS-set = setnull, setparsing

and a transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = (<specific regex>)
DEST_KEY = queue
FORMAT = indexQueue

As a standalone stanza, I would need the new input like this, with its own setparsing transforms:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 2>
sourcetype = <dataspecific sourcetype 2>
_TCP_ROUTING = qs_cluster

To be honest, I could just create a second stanza that's a little different and still reads the same file, but I don't want two tailreaders on the same file. What possibilities do I have? Thanks in advance
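One possibility that avoids a second tailreader, sketched here with placeholder names as an assumption rather than a verified recipe: CLONE_SOURCETYPE in transforms.conf clones matching events into a new sourcetype at parsing time, and the clone's sourcetype can then carry its own routing.

```
# transforms.conf (on the parsing tier)
[clone_for_qs]
REGEX = <regex matching the second process's entries>
CLONE_SOURCETYPE = <dataspecific sourcetype 2>

[route_to_qs]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = qs_cluster

# props.conf
[<dataspecific sourcetype 1>]
TRANSFORMS-clone = clone_for_qs

[<dataspecific sourcetype 2>]
TRANSFORMS-route = route_to_qs
```

The index for the cloned events would still need its own override, and whether cloned events can be re-routed with _TCP_ROUTING this way is worth verifying in a test environment first.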
Hello @loganramirez, Can you confirm whether the role of the user trying to schedule a PDF has the list_settings capability enabled? As mentioned in the following doc, the list_settings capability is required for the menu option to be populated. Doc - https://docs.splunk.com/Documentation/Splunk/9.3.0/Viz/DashboardPDFs#Schedule_PDF_delivery    Thanks, Tejas.   --- If the above solution works, an upvote is appreciated!
Hey, were you able to find the resolution on this?
I met the same problem. It looks like the Outlier Chart does not support drill-down officially, or maybe it needs further development.
Thanks for your guideline, but it does not work on the latest Splunk. It seems outlier_viz_drilldown.js needs some changes to adapt to the latest Splunk version. Can you tell me how to drill down to another dashboard? Also, the eval isOutlier should be: | eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0)