All Posts



Here are a couple of links to related posts:

https://community.splunk.com/t5/Getting-Data-In/JSON-field-extraction-beyond-5000-chars/m-p/549963#:~:text=The%20auto%2Dfield%2Dextraction%20stops,result%20list%20after%20a%20search.
https://community.splunk.com/t5/Getting-Data-In/Missing-events-JSON-payload-and-indexed-extractions/m-p/489113

If you start changing limits.conf (which is not simple with Cloud), it will affect general settings, so it is not always the best way to go. If you have a field that is not extracted and it's a simple field, i.e. a single value inside a JSON object with no multivalue component, then a simple calculated field can work, e.g.

| eval type=spath(_raw, "data.tree.fruit.type")

In the conf, just use the spath(...) part for the eval definition.
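In props.conf form, that calculated field would look something like this (a sketch; the sourcetype name is a placeholder for your own):

```
# props.conf (calculated fields are search-time, so this can live on the search head)
[your_sourcetype]
EVAL-type = spath(_raw, "data.tree.fruit.type")
```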
Hi @JoshuaJJ

Make sure that the file has execute permissions so that it can be run without prepending "bash", for example:

chmod +x <yourSHFile>

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
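A minimal end-to-end check of this (the script name here is just an example):

```shell
# Create a trivial script, mark it executable, then run it directly
# without prepending "bash".
printf '#!/bin/bash\necho hello\n' > myscript.sh
chmod +x myscript.sh
./myscript.sh                      # prints "hello"
test -x myscript.sh && echo "executable"
```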
Hi @clightburn1

Unfortunately, as you have found, there is currently no ARM release of the Splunk UF package for Windows 11; the only available packages for Windows 11 are x86_64. There is no public information about any potential release of an ARM-based UF for Windows 11; however, if you speak to your account team they may be able to find out more about if/when this might become available, and they can often get you set up with beta access if it does. Note that these sorts of things have historically been covered by Non-Disclosure Agreements (NDAs).

Check out https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements for a full mapping of supported/available OS and architecture combinations.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
searchmatch is a somewhat odd function in that it looks at the event itself, i.e. it must have a _raw field. If you run this

| makeresults
| eval _raw="ONE TWO THREE"
| eval result=if(searchmatch("THREE TWO"), 1, 0)

you will see result=1, but if you run

| makeresults
| eval _raw="ONE TWO THREE"
| eval result=if(searchmatch("THREEX TWO"), 1, 0)

you will see result=0.

Also, if you run

| makeresults
| eval fieldstring="ONE TWO THREE"
| eval result=if(searchmatch("XX YY"), 1, 0)

you will also see result=1, which is odd, but that seems to be the way it handles a null _raw field. I am not sure why it finds a match when _raw is not present.

Note the example given in the documentation, which confuses things further:
https://docs.splunk.com/Documentation/Splunk/9.4.1/SearchReference/ConditionalFunctions#searchmatch.28.26lt.3Bsearch_str.26gt.3B.29

| makeresults 1
| eval _raw = "x=hi y=bye"
| eval x="hi"
| eval y="bye"
| eval test=if(searchmatch("x=hi y=*"), "yes", "no")
| table _raw test x y

If you set _raw to "x=low..." then the match will fail, so in this case it is comparing the match against the specific field x, where that field has a value different from the _raw content.

Anyway, your example sets a single field to a fixed value, so if you do this

| makeresults
| eval fieldstring="ONE TWO THREE"
| eval result=if(searchmatch("fieldstring=\"ONE TWO THREE\""), 1, 0)

you will get a correct match, but if you change the match text it will give you result=0.

Hope I've not managed to confuse you too much!
Hi @rksharma2808

Check out http://github.com/splunk/splunk-ansible, which is a whole set of Ansible playbooks for Splunk.

Are there particular tasks you are wanting to carry out, or particular technologies already in place? Also, what type of servers (and roles of those servers) are you interested in running this against?

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @pm771

The searchmatch function applies the parameter you pass it as if it were part of the original search, so "TWO THREE" is like "index=test TWO THREE", which is the same as "index=test THREE TWO" in SPL terms. (Like you said, it's doing an AND.)

If you want to search literally for "TWO THREE" then you need to do this:

| eval match=IF(searchmatch("\"TWO THREE\""),1,0)

i.e. add a set of escaped quotes around the text. This would be like running the following, if you follow what I mean:

index=test "TWO THREE"

Here are some comparisons that might help:

| makeresults
| eval _raw="ONE TWO THREE FOUR"
| eval match1=IF(searchmatch("TWO THREE"),1,0)
| eval match2=IF(searchmatch("THREE TWO"),1,0)
| eval match3=IF(searchmatch("SIX"),1,0)
| eval match4=IF(searchmatch("\"TWO THREE\""),1,0)
| eval match5=IF(searchmatch("\"THREE TWO\""),1,0)

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
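As a rough analogy (in Python, not SPL, purely to illustrate unquoted terms being ANDed versus a quoted phrase; this models only the simple whole-term case, not Splunk's full search semantics):

```python
def searchmatch_like(raw: str, query: str) -> bool:
    """Rough analogy of searchmatch(): a quoted query must appear as a
    contiguous phrase; otherwise every whitespace-separated term must
    appear somewhere in the event (an AND, order irrelevant)."""
    if query.startswith('"') and query.endswith('"'):
        return query.strip('"') in raw
    return all(term in raw.split() for term in query.split())

raw = "ONE TWO THREE FOUR"
print(searchmatch_like(raw, "TWO THREE"))    # True  (AND of terms)
print(searchmatch_like(raw, "THREE TWO"))    # True  (order irrelevant)
print(searchmatch_like(raw, "SIX"))          # False
print(searchmatch_like(raw, '"TWO THREE"'))  # True  (phrase present)
print(searchmatch_like(raw, '"THREE TWO"'))  # False (phrase absent)
```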
The JSON objects are very large and way over 5KB in size. I will look into calculated fields unless anyone else has a better suggestion.
Hi @livehybrid,

Thanks a lot for your consideration. I have been going through some Jetty-related posts (ring/ring-jetty-adapter/src/ring/adapter/jetty.clj at cefb95e698eeb8c58a082ddb2eec6fb9958506cb · ring-clojure/ring) in regard to this issue, since Jetty is the webserver running the Controller. It turns out this is not really a Jetty bug but its default behavior, and luckily there is a workaround. It is not a permanent solution, since the changes below will be reverted whenever Jetty is upgraded, but it temporarily solves the problem.

$ cd /opt/appdynamics/platform/product/controller/appserver/jetty/etc
$ cat jetty-ssl.xml
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure_9_3.dtd">
<!-- ============================================================= -->
<!-- Base SSL configuration                                        -->
<!-- This configuration needs to be used together with 1 or more   -->
<!-- of jetty-https.xml or jetty-http2.xml                         -->
<!-- ============================================================= -->
<Configure id="Server" class="org.eclipse.jetty.server.Server">
  <!-- =========================================================== -->
  <!-- Create a TLS specific HttpConfiguration based on the        -->
  <!-- common HttpConfiguration defined in jetty.xml               -->
  <!-- Add a SecureRequestCustomizer to extract certificate and    -->
  <!-- session information                                         -->
  <!-- =========================================================== -->
  <New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
    <Arg><Ref refid="httpConfig"/></Arg>
    <Call name="addCustomizer">
      <Arg>
        <New class="org.eclipse.jetty.server.SecureRequestCustomizer">
          <Arg name="sniRequired" type="boolean"><Property name="jetty.ssl.sniRequired" default="false"/></Arg>
          <Arg name="sniHostCheck" type="boolean"><Property name="jetty.ssl.sniHostCheck" default="true"/></Arg>
          <Arg name="stsMaxAgeSeconds" type="int"><Property name="jetty.ssl.stsMaxAgeSeconds" default="-1"/></Arg>
          <Arg name="stsIncludeSubdomains" type="boolean"><Property name="jetty.ssl.stsIncludeSubdomains" default="false"/></Arg>
        </New>
      </Arg>
    </Call>
  </New>
</Configure>

In the jetty-ssl.xml file above, the default value for jetty.ssl.sniHostCheck is "true". This value has to be changed to default="false" to bypass the sniHostCheck:

<New id="sslHttpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
  <Arg><Ref refid="httpConfig"/></Arg>
  <Call name="addCustomizer">
    <Arg>
      <New class="org.eclipse.jetty.server.SecureRequestCustomizer">
        <!-- output truncated -->
        <Arg name="sniHostCheck" type="boolean"><Property name="jetty.ssl.sniHostCheck" default="false"/></Arg>
        <!-- output truncated -->
      </New>
    </Arg>
  </Call>
</New>

You may also need to change it in the jetty-ssl.xml.j2 file. Then you have to restart the Controller AppServer. After the restart is complete, you will be able to access the AppDynamics Controller via https://<controller_ip_addr>:8181
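One quick way to flip that default in place is a sed substitution. The sketch below runs against a throwaway sample line for illustration; on a real system, back up jetty-ssl.xml first and run the same sed against the file in the Jetty etc directory:

```shell
# Demo on a minimal sample line; on a real Controller, target
# .../controller/appserver/jetty/etc/jetty-ssl.xml (after backing it up).
printf '%s\n' '<Arg name="sniHostCheck" type="boolean"><Property name="jetty.ssl.sniHostCheck" default="true"/></Arg>' > jetty-ssl.xml
sed -i 's/name="jetty.ssl.sniHostCheck" default="true"/name="jetty.ssl.sniHostCheck" default="false"/' jetty-ssl.xml
grep sniHostCheck jetty-ssl.xml    # now shows default="false"
```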
You made a point of emphasizing the different test_id (same test_name BUT a different test_id). Is it possible to have a row with the SAME test_id at some point, i.e. could you insert a row at row 3 with test_name="test 1" and test_id="0.98"? If so, the simple streamstats solution suggested by @livehybrid won't work. Is it possible to have the same test_id, and if so, what should the behaviour be?
We use Splunk Enterprise version 9.1.6. I have noticed some strange behavior of the searchmatch() function.

| makeresults
| eval fieldstring="ONE TWO THREE"
| eval result=if(searchmatch("THREE TWO"), 1, 0)

After the run, result equals 1. Why is it not looking for the complete literal string, and instead performing "THREE" AND "TWO"?
As the computer laptop field continues to grow the use of ARM based chips for Windows 11, is there an ETA on a Splunk Forwarder agent for this chipset?
Team am looking for some suggestions or insights Patch Automation  through Ansible , Terraform     
It's likely that your auto-extracted JSON fields are not covering the entire object, i.e. if you search type=* and it does not find some values, then those values do not exist in the auto-extracted field. The fact that they DO give results after the spath indicates this.

What is the size of your JSON object? By default, I believe Splunk will only auto-extract the first 5000 (5k?) bytes of a JSON object, so if you show "raw" in your display, rather than the syntax-highlighted view of the JSON, you can see where your fruit type field sits in the raw event.

If this is the case, then you can add some calculated fields using an spath eval statement to extract the fields, so they are always present before the search is run. BTW, I'm not totally sure of the best-practice way to manage this 5k limit, but the above will work.
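For reference, the limits involved look something like this (a sketch from memory of limits.conf; check the defaults for your version, and remember these apply globally, which is why the calculated-field approach is usually preferable):

```
# limits.conf
[spath]
# auto "extract-all" spath extraction only applies to roughly
# the first 5000 bytes of the event by default
extraction_cutoff = 5000

[kv]
# _raw is truncated to this many characters before auto KV extraction
maxchars = 10240
```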
Hi @lcguilfoil

Change your "splunk.events" to "splunk.table", so it's like this example:

"viz_URfuD3f4": {
    "containerOptions": {},
    "context": {},
    "dataSources": {
        "primary": "ds_0zCzRLMd"
    },
    "options": {},
    "showLastUpdated": false,
    "showProgressBar": false,
    "type": "splunk.table"
}

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Thanks for the quick reply. Sorry, I am not that familiar with the architecture. Splunk sends the data from the indexers (hot tier) to S3. Is S3 what you mean by SmartStore below? S3 is considered warm according to the diagram here. You would then be able to use the coldToFrozen script on the older data to send it from on-prem S3 to AWS? So it goes to AWS, just not by tiering it.
Hi @Osama_Abbas1

Have you configured an APPDYNAMICS_CONTROLLER_HOST_NAME variable when running AppD? If so, is this the IP address or the hostname of your install? IP addresses cannot be used with SSL certificate SNI, which would explain the error, although I would have expected just a browser warning. This makes me wonder: are you connecting via a proxy from your client to your AppD server? That could be trying to generate an SSL cert for the connection and failing.

Worth reading:
https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/java-agent/administer-the-java-agent/java-agent-configuration-properties#:~:text=Required%3A%20Yes-,Controller%20Host,-The%20hostname%20or
https://docs.appdynamics.com/appd/onprem/23.x/23.6/en/secure-the-platform/controller-ssl-and-certificates

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @ra__22

TRANSFORMS are applied as part of the typingQueue parsing process when data is ingested into Splunk. The data only goes through that once; subsequently changing the sourcetype will not trigger re-evaluation of props.conf rules for the new sourcetype.

EVAL statements, on the other hand, run at search time, so they will apply to the new sourcetype when you search the data, which is why you are seeing the eval fields working.

To fix your issue with "eliminate_unwanted_data" not running, try moving this transform call to the original sourcetype name, perhaps running it before you change the sourcetype to remove ambiguity.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
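As a sketch of that ordering idea (the stanza and transform names here are made up; transforms listed in one TRANSFORMS-<class> setting are applied in the order given):

```
# props.conf, keyed on the sourcetype the data arrives with
[original_sourcetype]
# filter the unwanted events first, then rewrite the sourcetype
TRANSFORMS-route = eliminate_unwanted_data, rewrite_sourcetype
```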
The only way to do that in a supported fashion is to put warm data in SmartStore and frozen data in AWS.  Frozen data can be put anywhere you wish using coldToFrozenScript.  Splunk doesn't care (or k... See more...
The only way to do that in a supported fashion is to put warm data in SmartStore and frozen data in AWS.  Frozen data can be put anywhere you wish using coldToFrozenScript.  Splunk doesn't care (or know) about your frozen data.
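For the frozen side, a hedged indexes.conf sketch (the index name and script path are placeholders; the script itself is responsible for copying each bucket to AWS before Splunk deletes it):

```
# indexes.conf on the indexers
[your_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/copy_bucket_to_aws.py"
```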
Hi all,

My customer would like to use SmartStore with on-prem S3 storage (StorageGRID) and then tier the older data (after 3 years) to AWS via StorageGRID's Cloud Storage Pools. Is this supported?

TIA,
Frank
Hello,

I have a bash script that basically creates a cronjob. Not sure if this is allowed or not, but I am able to execute it just fine when logged into the splunkfwd account on the UF. However, when ExecProc tries to execute it, I get a permission denied. (The app below is deployed via a Deployment Server.)

Sample script (something simple; trying to get it to work first before I build in my if/then statements):

#!/bin/bash
# Install a crontab entry for the current user; capture any output in /tmp/cron_out
echo "* * * * * testing_this_out" | crontab - > /tmp/cron_out 2>&1

inputs.conf:

[script://./bin/install_cron.sh]
disabled = false
interval = 10
sourcetype = cron_upgrader
index = splunk_upgrade

App structure:

/opt/splunkforwarder/etc/apps/<app_name>/
    bin/
        install_cron.sh
    local/
        inputs.conf

Not sure, but I am pretty sure Splunk restricts what can be executed, since if I manually execute the script it works fine.