All Posts

Both @livehybrid's and @richgalloway's solutions are OK, but the question is what problem you are actually trying to solve. It's relatively unlikely that you have, let's say, 8k or 9k character long events which are perfectly "ok" and suddenly, when an event hits the 10k limit, it becomes "worthless" to you so you drop it. That doesn't make much sense, since a hard threshold on data size doesn't seem to be a reasonable way of differentiating between types of data. I'd be hard pressed to find a scenario where this actually makes more sense than checking the data syntactically. BTW, Splunk operates on characters, not bytes, so while TRUNCATE indeed cuts to "about" the given size in bytes, the len() function returns the number of code points (not even characters! It may differ in some scripts using composite characters) rather than bytes.
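If you want to see the code-point behaviour of len() for yourself, here is a minimal check you can run (the string is just an arbitrary example containing multi-byte characters):

| makeresults
| eval s="zażółć"
| eval chars=len(s)

len() reports 6 here, even though the UTF-8 encoding of that string is 10 bytes - exactly the character-vs-byte distinction described above.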
Ahh... right. If you change the license type, that might indeed cause "strange" behaviour since different license types normally don't stack and may enable different features. Hence the restart.
Hi @Piyush_Sharma37

Increase the maximum upload size limit in your Splunk Enterprise configuration:

1. Navigate to $SPLUNK_HOME/etc/system/local/ on your Splunk server.
2. Create or edit the web.conf file.
3. Add or modify the [settings] stanza to include max_upload_size:

[settings]
max_upload_size = 2048

Set it to a value in MB larger than the app file size, e.g., 2048 for 2 GB.

4. Save the web.conf file.
5. Restart Splunk Enterprise for the changes to take effect.
6. Attempt the installation of the "Python for Scientific Computing" app again through the UI.

Splunk has a default limit on the size of apps that can be uploaded via the web interface. The "Python for Scientific Computing" app package is often larger than this default limit, causing the "file size is too big" error. Increasing the max_upload_size parameter in web.conf allows Splunk to accept larger app files during installation. Ensure you have sufficient disk space on the Splunk server where the app will be installed and unpacked. Restarting Splunk is mandatory for the configuration change to be applied. See the web.conf documentation for details.

You can also install it from the command line using:

./splunk install app <path/packagename>

Depending on your architecture and configuration, you may need to install this via your Splunk Deployment Server rather than performing a manual install. Please review the installation docs for more information: https://docs.splunk.com/Documentation/MLApp/5.5.0/User/Installandconfigure

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Snorre

The license files are XML files inside, so if you have a look at the contents of the files in the license directory you might be able to clarify which one you applied, if unsure. They each have a unique signature (amongst other things) inside the file. Any text editor should work for viewing them.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
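If it helps, on a typical Linux install you can inspect them from the shell like this (the "enterprise" subdirectory is just an example; the actual group directory depends on which license types you have installed):

ls -l $SPLUNK_HOME/etc/licenses/enterprise/
cat $SPLUNK_HOME/etc/licenses/enterprise/<your_license_file>.lic

The XML inside includes, amongst other things, the quota, the expiration time and the signature mentioned above.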
@Piyush_Sharma37

Splunk has a default maximum upload size of 500MB for files uploaded via the web interface. You can increase this limit by editing the web.conf file. Navigate to the web.conf file in your Splunk installation directory (usually found in C:\Program Files\Splunk\etc\system\local). Add or modify the following line under the [settings] stanza:

[settings]
max_upload_size = 1000

Save the file and restart Splunk.

web.conf - Splunk Documentation

Manual install: if increasing the upload limit doesn't work or you prefer a direct approach, manually install the PSC add-on:

1. Download the PSC add-on (.tar.gz file) from Splunkbase.
2. Extract the .tar.gz file to $SPLUNK_HOME/etc/apps/ (e.g., C:\Program Files\Splunk\etc\apps\). Ensure the extracted folder is named appropriately.
3. Restart Splunk.
4. Verify the installation in Splunk Web under Apps > Manage Apps; PSC should appear in the list.
Hello everyone, I'm Piyush and I'm new to the Splunk environment. I was getting along with MLTK and Python for Scientific Computing to develop something for the ongoing Splunk hackathon, but although I have tried to install it several times, it still shows me an XML screen saying the file size is too big. I even deleted and re-downloaded the Python file and uploaded it again, but the issue still persists, while other add-ons like MLTK, etc. installed just fine. I'm on Windows and I don't have a clue how to move forward from here, as I am learning about the Splunk environment on the go.
Thanks for getting back to me. I started to look into the /etc/licenses folder and toyed around with the files there, and now I think I've figured out what is happening: if I install a Dev key in the Prod environment, Splunk deletes all Prod keys in the folder and creates a "Restart required" message in the dashboard. After the restart, only the files installed after the dev key was loaded remain. I might very well have installed a new dev key in the prod environment, as I received renewal keys for both prod and dev in the same email. We will ask the maintenance team for a restore of the files in the licenses folder and it will probably be sorted. It would be great if Splunk could show a warning when I try to do such a stupid thing as uploading a dev license in the prod environment, or maybe even keep a backup of the license files when deleting them, but I have learned now and won't be doing that again.
Normally the licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses so if they "disappeared"... See more...
Normally the licenses shouldn't "disappear" on their own. Even when licenses expire, they still show as expired. The licenses are backed by files in $SPLUNK_HOME/etc/licenses so if they "disappeared" someone must have deleted them. Check your backups for contents of this directory.
Installing an app on the SH tier doesn't directly install the same app on indexers. Parts of apps are pushed to indexers as the knowledge bundle. Anyway, back to the original question (which is a bit dated) - if you have an automatic lookup defined, you must have a lookup to back it. All "solutions" in this thread do not limit the scope of the lookup to a single SH but rather distribute the lookup across the whole environment.
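For reference, an automatic lookup is just a LOOKUP- setting in props.conf pointing at a lookup definition in transforms.conf. A minimal sketch with hypothetical names (my_sourcetype, my_lookup_def and the field names are made up here) would look roughly like this:

== props.conf ==
[my_sourcetype]
LOOKUP-my_auto = my_lookup_def input_field OUTPUTNEW output_field

== transforms.conf ==
[my_lookup_def]
filename = my_lookup.csv

The lookup file (or collection) behind that definition has to exist wherever the configuration is distributed, which is why a missing CSV tends to surface as lookup errors on the indexers once the bundle is pushed.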
I added a 30GB renewal license key that is valid from June 14th later this year. Afterwards I got a message telling me to restart Splunk. I did that, and now all other licenses are missing from the License admin console. Has anyone experienced this before? Is there a way to recover the old licenses? Running Splunk Enterprise 9.2.0.1 on-prem on Red Hat.
While both approaches (foreach and transpose) should get you what you want, they might not have very good performance. Since "we're using first row as column names" I'm wondering if it wouldn't be e... See more...
While both approaches (foreach and transpose) should get you what you want, they might not have very good performance. Since "we're using the first row as column names", I'm wondering if it wouldn't be easier not to pull the data directly into Splunk, but rather to write it to a CSV file and ingest that file with indexed extractions (yes, that's often not the best way either, but in this case it might be better).
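A minimal props.conf sketch of what that could look like, assuming a hypothetical sourcetype name and a CSV whose first row is the header (indexed extractions are applied where the file is monitored, so this stanza belongs on the forwarder reading the file):

[my_csv_data]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1

With that in place, each row becomes an event with the header row supplying the field names, so no foreach/transpose gymnastics are needed at search time.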
Log1 (dataset1)

Splunk query:
index=xyz X_App_ID=abc API_NAME=abc_123 NOT externalURL

Output of Splunk query:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : X-App-ID=abc : X-Client-ID=kjzoAHK7Bt2vnV5jLQIUuKQZDaXqtJJK : X-Client-Version=6.0.0.3627 : X-Workflow= : serviceType= : API_NAME=abc_123 : COMPLETE_URL=<URL> : Client_IP=<IP> : ApiName=abc_123 : StatusCode=200 : ExecutionTime=234 : Brand=abc_345 : Response={JSON response}

Log2 (dataset2)

Splunk query:
index=xyz "xmlResponseMapping"

Output of Splunk query:
2025-05-01 04:54:57.335 : X-Correlation-ID=1234-acbd : xmlResponseMapping : accountType=null, accountSubType=null,

Dataset1 and Dataset2 are connected using "X-Correlation-ID" only. Dataset2 has more than 3000K logs for the last 8 hours, while dataset1 has 20-21K logs for the last 8 hours. I want "accountType" and "accountSubType" from dataset2 for X-Correlation-ID=<alpha-numeric> where X-App-ID=abc from dataset1. Dataset2 has data for multiple "X-App-ID" values but doesn't have the field "X-App-ID" in its logs.

If I try the query below, it gives me an output of 3000K (all from dataset2):

index=masapi (X_App_ID=ScamShield API_NAME=COMBINED_FEATURES NOT externalURL) OR (napResponseMapping)
| stats values(accountType) as accountType values(accountSubType) as accountSubType by X_Correlation_ID

Kindly suggest a better way.
Hi @Cheng2Ready,

if you need help, open a new post so more people in the Community will be able to help you.

Anyway, start by checking which condition fails: the lookup or the weekday; then check whether it fails every time or only sometimes, and if sometimes, when.

As a secondary test, check if it's a border condition: e.g. if the event has a timestamp at 23:59:59 or 00:00:00.

Ciao.

Giuseppe
As a versatile alternative, you can use transpose. Using the same lookup example as @livehybrid does, this is how to transform these extended mock data

F1     F2     F3
Hello  World  Test
Some   thing  else

into this form

FieldName1Example  FieldName2Example  FieldName3Example
Hello              World              Test
Some               thing              else

| transpose 0
| lookup fieldtest.csv fieldID as column
| fields - column
| transpose 0 header_field=fieldName
| fields - column
You have a couple of options.

1) If you have the permissions, you can install a custom app that contains the CSV in the app's lookups directory. The installation process will push the app to the search heads as well as the indexers. That should resolve your "idx... lookup not found" errors.

2) You can transition the lookup to a KV store and configure replicate=true in transforms.conf:

replicate = <boolean>
* Indicates whether to replicate this collection on indexers.
  When false, this collection is not replicated on indexers, and lookups
  that depend on this collection are not available (although if you run a
  lookup command with 'local=true', local lookups are available).
  When true, this collection is replicated on indexers.
* Default: false

However, there are some default limitations regarding how many results can be returned for a search, depending on the size of your lookup. You can review those limitations here: https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Limitsconf#.5Bkvstore.5D
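A minimal sketch of what that KV store definition could look like (the collection name and the fields are hypothetical - adjust them to your lookup's actual columns):

== collections.conf ==
[my_collection]

== transforms.conf ==
[my_kvstore_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, host, owner
replicate = true

With replicate = true the collection is distributed to the indexers, so lookups that depend on it no longer fail there.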
stats is always the way to join datasets together - please remove join from your toolkit; it can always be replaced with a better option and is not the Splunk way to do things. It has numerous side effects that can result in unexpected results, as you are seeing. @kamlesh_vaghela gives you an example of how to "join" using stats, but one other observation on your example is that you are using table, which is a transforming Splunk command, so you should try to use it as late as possible in your SPL, as it has consequences on where the data is manipulated. If you are just looking to restrict the fields before an operation, use the fields command instead, and note that in the stats example you can still rename X_Correlation_ID to ID after the stats command, which is a minor optimisation.
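For reference, a rough sketch of that stats-based pattern applied to the earlier question (the index, field and value names are taken from that post; the from_ds1 flag is made up here purely to keep only correlation IDs that also appear in dataset1):

index=xyz (X_App_ID=abc API_NAME=abc_123 NOT externalURL) OR ("xmlResponseMapping")
| eval from_ds1=if(X_App_ID="abc", 1, 0)
| stats values(accountType) as accountType values(accountSubType) as accountSubType max(from_ds1) as from_ds1 by X_Correlation_ID
| where from_ds1=1
| fields - from_ds1

This assumes X_App_ID, accountType and accountSubType are already extracted at search time, as in the original query.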
Can you show your search? It seems that those numbers and warnings are the same as the example you gave - if that is what it is showing, then that is likely what the data contains. Please share an example of a couple of messages along with your search, because the search itself will work - note that you should not include the eval _raw part, as that is just setting up example test data to show how the rest of the search can work.
Tried need-props=true. All it added was Servlet URI, EUM Request GUID, and ProcessID. Thanks.
Hi @JohnGregg

It's not clear from the docs - it doesn't look like there is much you can add to the API call to tell it to bring back further detail. However, I was wondering if you have need-props=true in your existing API call? This may add some further context which might help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @yashb

I've used INGEST_EVAL to achieve this for a customer previously, although, as @richgalloway mentioned, you may be able to achieve this with Ingest Actions too. Here is the sample props/transforms for INGEST_EVAL:

== props.conf ==
[yourSourcetype]
TRANSFORMS-dropBigEvents = dropBigEvents

== transforms.conf ==
[dropBigEvents]
INGEST_EVAL = queue=IF(len(_raw)>=10000,"nullQueue",queue)

You could also achieve this with a regex match, however I think this would be resource intensive, so I would personally use the INGEST_EVAL route, but am including it for completeness:

[dropBigEvents]
REGEX = ^.{10000,}
DEST_KEY = queue
FORMAT = nullQueue

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.