All Posts

Hello @ccampbell,

The reason you are not able to upgrade the app to the latest version is that the app is not listed as compatible with the Splunk Cloud platform on Splunkbase. If you want the updated package to be available for upgrade on Splunk Cloud, you'll need the app's developer to update the platform compatibility details. (The Splunkbase listing shows the platform compatibility for the app; a screenshot of it was originally attached here.)

I assume the AppInspect vetting failed for the app package, which is why Splunk Cloud compatibility is missing. As a workaround, you can download the app package from Splunkbase, change the app ID (in app.conf, the folder name, and anywhere else it is used within the package), repackage the app, and upload it as a private app. This approach is not recommended, since it doesn't let you track and stay current with future updates. Additionally, if AppInspect fails when you upload the private app, you'll need to fix the errors, repackage, and re-vet repeatedly until the app passes the vetting process. The best approach is to engage the developer and make the app compatible with the Splunk Cloud platform.

Thanks,
Tejas

--- If the above solution helps, an upvote is appreciated!
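A minimal sketch of the app.conf change for the private-copy workaround, assuming a hypothetical new ID my_app_private (the ID must match the renamed app folder):

# app.conf in the renamed folder, e.g. etc/apps/my_app_private/default/app.conf
[package]
id = my_app_private
check_for_updates = 0

[ui]
label = My App (private copy)

Remember to search the rest of the package for the old ID as well; some apps reference it in navigation XML, dashboards, or lookups.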
Hello @shai,

In this scenario, you'll need to combine your certificates with the Splunk Cloud certificates. Append the self-signed certificate and the Splunk Cloud root CA into a single CA file and use that file for communication. This chain will let you communicate with both the on-prem Splunk Enterprise and the Splunk Cloud environment.

Thanks,
Tejas.

--- If the above solution helps, an upvote is appreciated!
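A minimal sketch of the idea, with hypothetical file names (my_selfsigned_ca.pem, splunkcloud_rootCA.pem). First concatenate both CA certificates into one bundle:

cat my_selfsigned_ca.pem splunkcloud_rootCA.pem > combined_ca.pem

Then reference the bundle where the connection is configured, for example in outputs.conf on a forwarder (check the outputs.conf spec for your version; the setting placement here is an assumption):

[tcpout:primary_indexers]
sslRootCAPath = $SPLUNK_HOME/etc/auth/combined_ca.pem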
Hi @TheEggi98,
good for you, see you next time!
Ciao and happy splunking.
Giuseppe
P.S.: Karma Points are appreciated
Alright, thank you. I will use sourcetype and index overriding and then make the newly added data available for our QS cluster to build dashboards.
We have the below data in JSON format. I need help with a custom JSON response handler so Splunk can break every event out separately. Each event starts with record_id:

{
  "eventData": [
    {
      "record_id": "19643",
      "eventID": "1179923",
      "loginID": "PLI",
      "userDN": "cn=564SD21FS8DF32A1D87FAD1F,cn=Users,dc=us,dc=oracle,dc=com",
      "type": "CredentialValidation",
      "ipAddress": "w.w.w.w",
      "status": "success",
      "accessTime": "2024-08-29T06:23:03.487Z",
      "oooppd": "5648sd1csd-952f-d630a41c87ed-000a3e2d",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    },
    {
      "record_id": "19644",
      "eventID": "1179924",
      "loginID": "OKP",
      "userDN": "cn=54S6DF45S212XCV6S8DF7,cn=Users,dc=us,dc=CVGH,dc=com",
      "type": "Logout",
      "ipAddress": "X.X.X.X",
      "status": "success",
      "accessTime": "2024-08-29T06:24:05.040Z",
      "oooppd": "54678S3D2FS962SDFV3246S8DF",
      "attributekey": "User-Agent",
      "attributevalue": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
    }
  ]
}
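Not a definitive answer, but a standalone Python sketch of the splitting logic such a handler needs (the handler class itself must match the interface of whichever REST input app you use; all names here are illustrative):

import json

def split_events(raw_response):
    # Parse the API response and emit each eventData record as its own event.
    payload = json.loads(raw_response)
    for record in payload.get("eventData", []):
        # One compact JSON object per line; Splunk can then break on each line.
        print(json.dumps(record))

if __name__ == "__main__":
    sample = '{"eventData": [{"record_id": "19643"}, {"record_id": "19644"}]}'
    split_events(sample)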
Hi @TheEggi98,
if the file to read is always the same in both inputs, Splunk doesn't read a file twice, and the solution is the second one I described (overriding). If instead you have different files in the same path to read in the two inputs, you can specify the different file name to read in each input stanza, still using the same path.
Ciao.
Giuseppe
Hi @gcusello,

thanks for the fast response. If I'm not wrong, I could theoretically bypass the precedence rules by doing this (at least btool doesn't complain), but I will not do that:

[monitor://<path to logfile>.log]
...

[monitor://<path to same logfile>.lo*]
...

When overriding sourcetype and index on the indexer, am I able to route data of the second sourcetype to our QS cluster to build dashboards?
Hi @TheEggi98,

you cannot read the same files in two input stanzas; only one (by precedence rules) will be used. If, in the same path, you have to read different files for each input, you can specify the correct file to read in each stanza.

If instead the data are in the same file, the only solution is to read it with one input stanza and then override the index and, if needed, the sourcetype values on the indexers or (if present) on heavy forwarders, following the instructions at:

for sourcetype: https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/Advancedsourcetypeoverrides
for index: https://community.splunk.com/t5/Getting-Data-In/Route-data-to-index-based-on-host/td-p/10887

Ciao.
Giuseppe
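A minimal sketch of that override, reusing the placeholder names from this thread (<dataspecific sourcetype 1/2>, <dataspecific index 2>) and assuming a regex that matches the second process's entries:

# props.conf on the indexers / heavy forwarder
[<dataspecific sourcetype 1>]
TRANSFORMS-route_second_process = override_sourcetype_2, override_index_2

# transforms.conf
[override_sourcetype_2]
REGEX = <process-specific regex>
FORMAT = sourcetype::<dataspecific sourcetype 2>
DEST_KEY = MetaData:Sourcetype

[override_index_2]
REGEX = <process-specific regex>
FORMAT = <dataspecific index 2>
DEST_KEY = _MetaData:Index

These are index-time transforms, so they only apply where parsing happens (indexer or heavy forwarder), not on a universal forwarder.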
Try something like this

| eval root=mvjoin(mvindex(split(policy,"_"),0,1),"_")
| eval version=mvindex(split(policy,"_"),2)
| timechart span=48h values(version) as version by root
| eval date=if(_time < relative_time(now(),"-2d"), "Last 48 Hours", "Today")
| fields - _time _span
| transpose 0 header_field=date column_name=policy
| eval "New version"=if('Last 48 Hours' == Today, null(), Today)
Hi @btheneghan,

if you already extracted the manual_entry field and the format is always the one you described in your samples, you could use this regex:

| rex field=manual_entry "^\#\d+\s(?<manual_entry>.*)"

If you haven't extracted the manual_entry field yet and the format is always the one you described in your samples, you could use:

| rex "^\#\d+\s(?<manual_entry>.*)"

Ciao.
Giuseppe
Hi there,

I have a file monitoring stanza on a universal forwarder where I filter using transforms.conf to keep only the log entries I need, because the server writes log entries of multiple business processes into the same logfile. Now I need entries of another process (with a different ACL) from that logfile in a different index, and in our QS cluster, while the first data input still ingests into our PROD cluster.

So I have my inputs.conf

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 1>
sourcetype = <dataspecific sourcetype 1>

a props.conf

[<dataspecific sourcetype 1>]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1500
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT = [%y/%m/%d %H:%M:%S]
TRANSFORMS-set = setnull, setparsing

and a transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = (<specific regex>)
DEST_KEY = queue
FORMAT = indexQueue

As a standalone stanza, I would need the new input like this, with its own setparsing transforms:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 2>
sourcetype = <dataspecific sourcetype 2>
_TCP_ROUTING = qs_cluster

To be honest, I could just create a second stanza that's a little different and still reads the same file, but I don't want two tail readers on the same file. What possibilities do I have?

Thanks in advance
Hello @loganramirez,

Can you confirm whether the user trying to schedule a PDF has the list_settings capability enabled on their role? As mentioned in the following doc, the list_settings capability is required for the menu option to be populated.

Doc - https://docs.splunk.com/Documentation/Splunk/9.3.0/Viz/DashboardPDFs#Schedule_PDF_delivery

Thanks,
Tejas.

--- If the above solution works, an upvote is appreciated!
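A minimal sketch of granting that capability, assuming a hypothetical custom role named pdf_user:

# authorize.conf
[role_pdf_user]
list_settings = enabled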
Hey, were you able to find a resolution for this?
I'm running into the same problem. It looks like the Outlier Chart does not officially support drilldown; it may need additional custom development.
Thanks for your guide, but it does not work on the latest Splunk. It seems outlier_viz_drilldown.js needs some changes to adapt to the latest Splunk version. Can you tell me how to drill down to another dashboard? Also, the isOutlier eval should be:

| eval isOutlier=if('residual' < lowerBound OR 'residual' > upperBound, 1, 0)
Anyone who comes across this issue, please upvote the following idea for a configuration option to disable INDEXED_EXTRACTIONS via an app's local props.conf.

https://ideas.splunk.com/ideas/EID-I-2400
Here's the approach I would use. It may not be the best way; an SPL sketch follows the list.

1. Search the last 48 hours for the desired events
2. Extract the Policy_Name field into Last_48_Hours_Policy_Names
3. Extract the "root" policy name ("policy_n_") from Last_48_Hours_Policy_Names
4. Append the search of today for the desired events
5. Extract the Policy_Name field into Today_Policy_Names
6. Extract the "root" policy name ("policy_n_") from Today_Policy_Names
7. Regroup the results on the root policy name field
8. Discard the root policy name field
9. Compare Last_48_Hours_Policy_Names to Today_Policy_Names. If different, set New_Policy_Names to Today_Policy_Names
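A minimal SPL sketch of those steps, assuming a hypothetical index name your_index and Policy_Name values shaped like policy_n_version:

index=your_index Policy_Name=* earliest=-2d@d latest=@d
| eval root=mvjoin(mvindex(split(Policy_Name,"_"),0,1),"_")
| stats values(Policy_Name) as Last_48_Hours_Policy_Names by root
| append
    [ search index=your_index Policy_Name=* earliest=@d latest=now
    | eval root=mvjoin(mvindex(split(Policy_Name,"_"),0,1),"_")
    | stats values(Policy_Name) as Today_Policy_Names by root ]
| stats values(Last_48_Hours_Policy_Names) as Last_48_Hours_Policy_Names values(Today_Policy_Names) as Today_Policy_Names by root
| fields - root
| eval New_Policy_Names=if(mvjoin(Last_48_Hours_Policy_Names,",")!=mvjoin(Today_Policy_Names,","), Today_Policy_Names, null())

The mvjoin calls stringify the multivalue fields so the comparison behaves predictably; adjust the time ranges to your definition of "last 48 hours" versus "today".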
There's probably more than one way to do that.  If you want to use rex then this should do it.  It just takes everything after the first space as the manual_entry field. | rex "\s(?<manual_entry>.*)"  
I have never been one to understand regex; however, I need to extract everything after the first entry (#172...) into its own field. Let's call it manual_entry. I'm getting tired of searching and randomly trying things.

#1724872356 exit
#1724872357 exit
#1724872463 cat .bashrc
#1724872485 sudo cat /etc/profile.d/join-timestamp-history.sh
#1724872512 exit
#1724877740 firefox

manual_entry
exit
exit
cat .bashrc
sudo cat /etc/profile.d/join-timestamp-history.sh
exit
firefox