All Topics


Hi, I am facing an issue with an eval if condition. Please help.

index=main, source=ls.csv | eval new_field = if(error=200,"sc","cs",if(error=500,"css","ssc")) | table error new_field

Regards, Suman P.
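For reference, eval's if() takes exactly three arguments (condition, value if true, value if false), so the extra "cs" argument above is what breaks the expression; the comma in index=main, source=ls.csv is also not valid. A minimal corrected sketch, assuming the intended mapping is 200 to "sc", 500 to "css", and anything else to "ssc":

index=main source=ls.csv
| eval new_field = if(error=200, "sc", if(error=500, "css", "ssc"))
| table error new_field

The same logic reads more cleanly with case(): | eval new_field = case(error=200, "sc", error=500, "css", true(), "ssc")
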
Dear All, I have created a TA to monitor a custom Python script named "log_parser_v1.py". Here is the configuration from /splunk/etc/apps/TA-logs/default/inputs.conf:

[script://./bin/log_parser_v1.py]
python.version = python3.9
interval = 300
disabled = false

But the TA fails with the error "ModuleNotFoundError: No module named 'syslog'". So I tried to debug with splunk cmd python, and it throws the same "ModuleNotFoundError: No module named 'syslog'" error:

[ss@localhost bin]$ ./splunk cmd python log_parser_v1.py
Traceback (most recent call last):
  File "bin/log_parser_v1.py", line 7, in <module>
    import syslog
ModuleNotFoundError: No module named 'syslog'

But the same script runs fine with the command python3.9 bin/log_parser_v1.py. Here are the first few lines of the script, with the import of the "syslog" module on line 7:

[ss@localhost bin]$ cat log_parser_v1.py
#!/usr/bin/env python
import os, sys
sys.path.append('/usr/bin/python3.9')
sys.path.append('/usr/lib/python3.9/site-packages')
sys.path.append('/usr/lib64/python3.9/site-packages')
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
import json, logging, syslog, datetime, argparse, shutil, zipfile, tarfile, bz2, socket, sys, errno, time, gzip, hashlib
from logging.handlers import SysLogHandler, SYSLOG_TCP_PORT
from syslog import LOG_USER

To use python3.9 I appended the python3.9 package paths in the script, but it is still not picking up the syslog module. Here are the python3.9 paths:

[ss@localhost bin]$ whereis python
python: /usr/bin/python2.7 /usr/bin/python3.6 /usr/bin/python3.6m /usr/bin/python3.9 /usr/lib/python2.7 /usr/lib/python3.6 /usr/lib/python3.9 /usr/lib64/python2.7 /usr/lib64/python3.6 /usr/lib64/python3.9 /usr/include/python3.9 /usr/include/python2.7 /usr/include/python3.6m /usr/share/man/man1/python.1.gz

I also tried to import the syslog package with ./splunk cmd python, but it failed:

[ss@localhost bin]$ ./splunk cmd python
Python 3.7.11 (default, May 25 2022, 12:23:55)
[GCC 9.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import syslog
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'syslog'
>>> exit()

And here it imports successfully with python3.9:

[ss@localhost bin]$ python3.9
Python 3.9.7 (default, Sep 13 2021, 08:18:39)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import syslog
>>> exit()

I am looking for your help to understand what is missing. Please help.
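For context: syslog is a compiled C extension tied to the interpreter it was built for, so appending the system python3.9 paths to sys.path cannot make it load under the Python 3.7.11 that splunk cmd python runs, and the transcript above shows Splunk's bundled Python does not ship that module (appending /usr/bin/python3.9, a binary, to sys.path also has no effect). A minimal sketch of one possible workaround using the pure-Python SysLogHandler the script already imports, assuming the local syslog daemon listens on the usual /dev/log socket:

#!/usr/bin/env python
import logging
from logging.handlers import SysLogHandler

# Assumption: /dev/log is the local syslog socket (the common Linux default).
handler = SysLogHandler(address='/dev/log')
logger = logging.getLogger('log_parser')
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info('Processing completed')
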
Hi Community,

I have a use case where the client needs data to be stored over an extended period of time. That data powers a dashboard that uses data models to generate the panels. Since the client wants data to be available for at least 6 months, the idea was to create an index that keeps hot/warm buckets on SSD and cold buckets on slower storage. I have two different questions here:

1. I have implemented this setup in our test environment with mixed storage for hot and cold buckets. Is there a way for me to check where my data is being stored?
2. Since my dashboards are all powered by data models, I have a question about the storage location and method of accelerated data. If the data is accelerated, does the data model summary folder store the complete accelerated data, or does it hold pointers to the location where the data is actually present?

The main problem is this: if we have mixed storage of SSD and HDD, and all the dashboards are powered by data models, how much will this affect the performance of Splunk? Will the time to load the dashboards be affected by such a storage model?

Regards, Pravin
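For the first question, one way to see which volume each bucket lives on is dbinspect, which reports the state (hot/warm/cold) and on-disk path of every bucket; a minimal sketch, with the index name as a placeholder:

| dbinspect index=your_index
| table bucketId state path
| sort state
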
I am trying to get a wildcard to work with a where clause. I'm not sure if I'm doing something wrong altogether or just missing some syntax, but my search is as follows:

index=my_index | where description=" Changed * role to * Admin"

Basically, I'm looking up whether any user had their role changed to any admin role. I thought this would be an easy one, and it is not.
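Worth noting: where performs an exact comparison, so the asterisks are treated as literal characters there; wildcards work in the search command, and within where the usual tool is like() with % as the wildcard. A sketch of both forms, assuming the leading space in the description value is real:

index=my_index | search description="*Changed * role to * Admin"

index=my_index | where like(description, "%Changed%role to%Admin")
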
Does anyone know if it's possible to create a cluster for the deployment server or the master server? I'm asking because it would let us move to DR more easily in case of a datacenter change, tests, or disasters. Also, our deployment server is quite slow (we have more than 5,000 universal forwarders); I think a deployment server cluster could solve this issue. Does anyone have any idea? Is it possible?
Hi, one thing that doesn't seem to be documented is how Splunk handles Linux file permissions when files from the deployer are pushed to the search head cluster. Docs: https://docs.splunk.com/Documentation/Splunk/9.0.2/DistSearch/PropagateSHCconfigurationchanges For example, I have an app "/opt/splunk/etc/shcluster/apps/my app". This app has a script under "/opt/splunk/etc/shcluster/apps/my app/bin/helloworld.sh". The script has the permissions "-rwxr-x---" on the deployer, but if I push it to the search head cluster it gets the permissions "-rw-rw-r--" on the search head cluster members. Note that the executable permission is removed, making the script unusable. I'm using Splunk version 9.0.2 on both the deployer and the search head cluster members. A colleague of mine is having the same problem, so I don't think it is something wrong with my Splunk environment in particular. Is anyone else experiencing this problem, and is there a workaround?
Hi, I am using the following SPL in a Splunk query. The field AdditionalData holds multiple values, and I am splitting them to extract and write separate fields. Now, if any of these extracted fields has a blank value, I want the default value "Not Available" to appear instead. Thanks in advance.

| eval "AddtionalData"=if(isnotnull('cip:AuditMessage.ExtraData'),'cip:AuditMessage.ExtraData',"Not Available")
| rex field=AddtionalData "Legal employer name:(?<LegalEmployerName>[^,]+)"
| rex field=AddtionalData "Legal entity:(?<LegalEntity>[^,]+)"
| rex field=AddtionalData "Country:(?<Country>[^,]+)"
| rex field=AddtionalData "Business unit:(?<BusinessUnit>[^,]+)"
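Since rex leaves a field null when its pattern does not match, one way to supply the default is fillnull on the extracted fields after the rex calls; a minimal sketch:

| fillnull value="Not Available" LegalEmployerName LegalEntity Country BusinessUnit
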
Hello everyone! I have recently started working with Splunk UI and I'm getting used to all the options it has. I guess my question may be a little dumb, but I got stuck and I'm not finding any documentation that helps me through it. At the moment, I have my app running and I have a dashboard page where I'm able to load some visualization components. My issue comes at the time of importing the indexes I have to read data from. Where can I find the location of those indexes, and how should I state the import for reading the data in them? Any link to relevant documentation or any help would be highly appreciated.
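In the Splunk UI toolkit, a visualization is usually fed by a search rather than by importing an index directly; a minimal sketch with the @splunk/search-job package, where the index name and search string are placeholders:

import SearchJob from '@splunk/search-job';

// Assumption: 'your_index' stands in for whichever index holds the data.
const job = SearchJob.create({
    search: 'search index=your_index | head 100',
    earliest_time: '-24h@h',
    latest_time: 'now',
});

// getResults() returns an observable that emits result sets as they arrive;
// feed them to the visualization component as needed.
job.getResults().subscribe((results) => {
    console.log(results);
});
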
Hello, I have a collection of logs (same sourcetype), but some of them have different or additional fields. In order to figure out when they appear, I'm trying to create a query that shows me which fields are distinct per time range. Let's say I have 200 events from 13:00 to 14:00. I want to group the stats values(*) results by creating time-range fields:

| eval timerange1=(13:00 to 13:15), timerange2=(13:15 to 13:30)

so I can use

| stats values(*) by timerange1, timerange2

I was considering using date_hour, date_minute, etc., but I think there must be an easier way, as that would need additional commands. Also, I don't know the right format, because I keep getting "Type checking failed. '-' only takes numbers." So do you have any suggestions for how I could solve this? I'm thankful for any help.

Kind regards, Alex
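A minimal sketch of the usual approach: instead of hand-built time-range fields, bucket _time into 15-minute spans with bin and group by that (run it over the 13:00-14:00 window via the time picker or earliest/latest):

| bin _time span=15m
| stats values(*) as * by _time
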
I have data something like below.

msg: {
    application: test-app
    correlationid: 0.59680117.1667864418.7d2b8d5
    httpmethod: GET
    level: INFO
    logMessage: {
        apiName: testApi
        apiStatus: Success
        clientId: testClientId1
        error: NA
        list_items: [
            {
                city: PHOENIX
                countryCode: USA
                locationId: dc5269a4-c043-4381-b757-63950feecac3
                matchRank: 1
                merchantName: testMerchant1
                postalCode: 12345
                state: AZ
                streetAddress: 4000 E SKY HARBOR BLVD
            }
            {
                city: PHOENIX
                countryCode: USA
                locationId: c7b97f03-b21b-4c11-aead-1ca3cd03d415
                matchRank: 2
                merchantName: testMerchant2
                postalCode: 56789
                state: AZ
                streetAddress: 4000 E SKY HARBOR BL
            }
            ......
        ]

I have to get a table with clientId and locationId, something like below:

clientId          locationId
testClientId1     dc5269a4-c043-4381-b757-63950feecac3
testClientId1     c7b97f03-b21b-4c11-aead-1ca3cd03d415

What I tried is:

| base search
| table "msg.logMessage.clientId", "msg.logMessage.matched_locations{}.locationId"

which resulted in the locationIds being grouped per clientId, hence one row even for multiple locationIds:

clientId          locationId
testClientId1     dc5269a4-c043-4381-b757-63950feecac3
                  c7b97f03-b21b-4c11-aead-1ca3cd03d415

Any help is appreciated.
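A sketch using mvexpand to split the multivalue locationId into one row per value, keeping the field path from the attempted search above (swap in list_items{} if that is the actual name in the events):

| eval clientId='msg.logMessage.clientId'
| eval locationId='msg.logMessage.matched_locations{}.locationId'
| mvexpand locationId
| table clientId locationId
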
Hi everyone, I need to find a way to filter which data specific roles can access inside an index. For example, index=servers contains servers from Windows, Linux, and ostype3, and we want the following:

roleA has access to index=servers (but only sees Windows servers)
roleB has access to index=servers (but only sees Linux servers)
roleC has access to index=servers (but only sees ostype3 servers)

This can be achieved by using search filters, and it worked OK. However, suppose I then have a role like this:

roleD has access to index=servers (but only sees Windows servers)
roleD has access to index=firewalls

This does not work for roleD: it cannot search index=firewalls, because the search filter takes precedence and limits the user to the Windows data in index=servers.

So I'm trying to find a solution that allows me to do what I need, and a summary index came to mind. However, I'm struggling with something: when my data is sent to the summary index, its sourcetype is changed to stash, and then my data is not parsed as it is in the original index. Suppose I change the sourcetype from stash back to the original sourcetype; that would then consume a lot more license, doubling it up. That's why I'm asking for help here. What solutions do I have? Am I missing something or doing something wrong? Thanks in advance if someone can help me with this.
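One pattern often suggested for this shape of problem is to scope the search filter to the index it is meant to restrict, so it becomes a no-op for every other index the role can read. A sketch in authorize.conf, where the ostype field name is a stand-in for whatever actually distinguishes the server types in the events:

[role_roled]
srchIndexesAllowed = servers;firewalls
srchFilter = (index!=servers OR ostype=windows)
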
Is there a way to edit a note in a container via the API? If not, is there any plan to expose this API in the future?
Hi, we are using the Splunk add-on Splunk_TA_windows to capture CPU, memory, disk, and other infrastructure log details. Through this add-on we get the CPU, memory, disk, and all the other sourcetypes in Splunk for our Windows servers. But on two of our Windows servers, every sourcetype except CPU and memory is being captured in Splunk, even though the monitoring stanzas for CPU and memory are present in the add-on's inputs.conf. Why aren't we receiving the CPU and memory sourcetypes from those servers? How do we get those details as well? Please suggest.
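One way to confirm what those two forwarders actually have enabled is btool, run locally on each server; a sketch (perfmon stanza names vary by add-on version, hence the broad grep):

$SPLUNK_HOME/bin/splunk cmd btool inputs list --debug | grep -i perfmon

The --debug flag shows which file each setting comes from, so a stray disabled override for the CPU or memory stanza on those hosts would stand out.
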
Hi, we have installed the Website Monitoring app and added a few URLs to monitor, but the data is not updating properly; the "last checked" time is not the latest time. Is there any setting to configure so that the data always updates? Also, please suggest an option to add URLs in bulk or via a script. Note: we are using this app on the Splunk Cloud UI.
What could be the stanza for monitoring the Linux directory containing /home/cleo/Harmony/script/logs/Harmony_directory_monitor_1hr.conf.20220512.log? I tried

[monitor:///home/cleo/Harmony/script/logs]
whitelist = *.log

but I am not able to ingest any data. The path has proper permissions.
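One likely culprit: whitelist takes a regular expression, not a shell glob, and *.log is not a valid regex (the leading * has nothing to repeat). A sketch of the stanza with a regex whitelist instead:

[monitor:///home/cleo/Harmony/script/logs]
whitelist = \.log$
disabled = false
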
Hi all, I am working on calculating the response time (max, PR99, and average) from Table 1. I would like to list the detailed procedure durations (Procedure-1/-2/-3) and label the rows as Max/PR99/Avg, so the output looks like Table 2. Does anyone have an idea how to implement this so that the max row shows the max response time together with the corresponding procedure times, instead of the maximum value of each field independently? Moreover, is there any way to include the average response time and the average Procedure-1/-2/-3 times in the same table as well?

Table 1 (in sec):

         Procedure-1   Procedure-2   Procedure-3   Total Response Time
Test-1   111           222           333           666
Test-2   200           100           300           600
Test-3   250           350           150           750

Table 2 (in sec):

      Total Response Time     Procedure-1             Procedure-2             Procedure-3
Max   750 (Test-3)            250 (from Test-3)       350 (from Test-3)       150 (from Test-3)
Avg   (666+600+750)/3 = 672   (111+200+250)/3 = 187   (222+100+350)/3 = 224   (333+300+150)/3 = 261

Thank you so much. #table #chart #stats #max
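A sketch of one approach, assuming each Test-N is a single event carrying the four numeric fields: keep the whole event with the largest total (so its procedure times stay together), then append a row of averages computed over all events. The base search is a placeholder:

index=your_index
| sort 0 - "Total Response Time"
| head 1
| eval Row="Max"
| append
    [ search index=your_index
    | stats avg("Total Response Time") as "Total Response Time"
            avg("Procedure-1") as "Procedure-1"
            avg("Procedure-2") as "Procedure-2"
            avg("Procedure-3") as "Procedure-3"
    | eval Row="Avg" ]
| table Row "Total Response Time" "Procedure-1" "Procedure-2" "Procedure-3"
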
Splunk data:

2022-01-01T02:06:12.182Z 7c3edf29-c081-4cca-ae9b-0f79ef7d1c8d INFO {"InfoLogInformation":{"MethodName":"index.handler","Message":""Processing completed"","LogType":"Info","Error":"2022-01-01T02:06:12.040Z::400 - {"ResponseStatus":{"ErrorCode":"WorkBookMessageException","Message":"###1234$$$ Invalid."

Query:

| rex ",\"Message\":\"\"(?<Message>.*?)\"\""
| rex "\"Exception\":\"400 - {\"ResponseStatus\":{\"ErrorCode\":\"(?<ErrorCode>.*?)\",\"Message\":\"(?<Message>.*?)\""

Query result: Message = Processing completed

I want the result to be Message = ###1234$$$ Invalid. Please help. TIA
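A sketch that anchors on the ResponseStatus block instead, which is where the wanted Message lives (note the sample event contains "Error", not "Exception", and the literal brace should be escaped in the regex):

| rex "\"ResponseStatus\":\{\"ErrorCode\":\"(?<ErrorCode>[^\"]+)\",\"Message\":\"(?<Message>[^\"]+)\""

Against the sample event above this captures ErrorCode = WorkBookMessageException and Message = ###1234$$$ Invalid.
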
I'd like to have a checkbox which, when checked, will either show or enable a text field, and when unchecked will hide or disable the text field:

<input type="checkbox" token="reqIdFilter">
  <label></label>
  <choice value="Enable">Enable</choice>
  <change>
    <condition match="$reqIdFilter$==&quot;Enable&quot;">
      <set token="showReqIdFilter">Y</set>
    </condition>
    <condition>
      <unset token="showReqIdFilter"></unset>
    </condition>
  </change>
</input>
<input type="text" token="RequestId" depends="$showReqIdFilter$">
  <label>RequestId</label>
</input>

But that doesn't seem to work. Is there something wrong with the above? Second, I'd like the search to use the value of $RequestId$ only if the checkbox is checked; how can I do that? `mySearch $RequestId$` will always inject $RequestId$, so how can I make this conditional on the checkbox?
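A sketch of one common Simple XML pattern covering both parts: match the checkbox with <condition value="..."> (a checkbox token is a multivalue list, so a string comparison via match= on it is fragile), and keep a separate token that holds the whole search fragment, so the search body only ever references that token. Token names here are illustrative:

<input type="checkbox" token="reqIdFilter">
  <label>Filter by RequestId</label>
  <choice value="Enable">Enable</choice>
  <change>
    <condition value="Enable">
      <set token="showReqIdFilter">Y</set>
      <set token="reqIdClause">RequestId="$RequestId$"</set>
    </condition>
    <condition>
      <unset token="showReqIdFilter"></unset>
      <set token="reqIdClause"></set>
    </condition>
  </change>
</input>

The search then becomes `mySearch $reqIdClause$`, which expands to nothing when the box is unchecked. One caveat: the fragment is captured when the checkbox changes, so a RequestId typed in afterwards only takes effect after toggling the checkbox again.
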
Hello, I have a quick question: is there a way to find out which app a specific index name was used in? I'm asking because we have a number of apps, but I forgot which apps I used for the index wincbs. Thank you so much in advance for your support in these efforts.
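A sketch that greps every app's saved searches for the index name over REST (the same idea works for dashboards via the data/ui/views endpoint):

| rest /servicesNS/-/-/saved/searches
| search search="*index=wincbs*"
| table title eai:acl.app search
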
Is there any way we can pull which SAML group names are configured in Splunk? Or is there any way we can get which roles are assigned to which SAML group in Splunk?
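A sketch using the REST endpoint behind the SAML group-mapping UI; the admin/SAML-groups path is my assumption and may differ by Splunk version:

| rest /services/admin/SAML-groups
| table title roles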