All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, I want to mask two fields, "password" and "cpassword", in events that are being written with plain-text values; the values need to be changed to #####. Sample event information:

[2024-01-31_07:58:28] INFO : REQUEST: User:abc CreateUser POST: name: AB_Test_Max;email: xyz@gmail.com;password: abc12345679;cpassword: abc12345679;role: User;
[2024-01-30_14:05:42] INFO : REQUEST: User:xyz CreateUser POST: name: Math_Lab;email: abc@yahoo.com;password: xyzab54;cpassword: xyzab54;role: Admin;

Kindly help with the props.conf so that I can apply SEDCMD to mask them.
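A minimal props.conf sketch for this kind of masking, assuming the events arrive under a sourcetype named app_request_logs (a placeholder) and that the password values never contain semicolons; the stanza belongs on the instance that performs parsing (indexer or heavy forwarder):

[app_request_logs]
# Mask both "password:" and "cpassword:" values up to the next semicolon
SEDCMD-mask_passwords = s/(c?password): [^;]+;/\1: #####;/g

The capture group keeps the original field name, so one rule rewrites both fields.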
Hi All, we have Splunk ES and use ServiceNow for triggered alerts. My question: if there are a few alerts that I want to assign priority P2, how can I do that in Splunk, given that the default priority is P3?
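A minimal sketch of one approach, assuming the alert search (or the search feeding the ServiceNow integration) can carry a priority field; the field names urgency and priority below are placeholders for whatever your integration actually maps:

... existing alert search ...
| eval priority=case(urgency=="critical","P1", urgency=="high","P2", true(),"P3")

The exact field the ServiceNow integration reads for priority depends on how the ES-to-ServiceNow mapping is configured, so verify the target field name there.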
Hello everyone, I need a solution for this. My data:

userID=text123 , login_time="2024-03-21 08:04:42.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 08:00:00.001000", ip_addr=12.3.3.45
userID=text123, login_time="2024-03-21 08:02:12.201000", ip_addr=12.3.3.21
userID=text123, login_time="2024-03-21 07:02:42.201000", ip_addr=12.3.3.34

I want to get the data where userID="text123" AND the events are in the last 5 minutes AND multiple IPs were used. I tried join, map, and append but could not solve it. Please help with the SPL for this.
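A minimal sketch using a single stats pass rather than join/map/append, assuming the events live in an index named your_index (placeholder) and that _time reflects login_time:

index=your_index userID="text123" earliest=-5m
| stats dc(ip_addr) AS distinct_ips values(ip_addr) AS ip_addr BY userID
| where distinct_ips > 1

This returns one row per user seen with more than one distinct IP in the last 5 minutes; drop the where clause if you want the row regardless of IP count.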
Hello, how do I set the time range from a dropdown in Dashboard Studio? For example, with a dropdown containing (Kindergarten, Elementary, Middle School, High School):

If "Kindergarten" is selected ==> time range => Last 24 hours
If "Elementary" is selected ==> time range => Last 3 days
If "Middle School" is selected ==> time range => Last 7 days
If "High School" is selected ==> time range => Last 30 days

Thank you so much.
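A minimal sketch of one approach, assuming you are free to choose the dropdown option values: set each option's value to a relative-time string and reference the resulting token in the search (or in the data source's time-range settings):

Kindergarten   -> -24h
Elementary     -> -3d
Middle School  -> -7d
High School    -> -30d

index=your_index earliest=$school_time$ latest=now

Here school_time is a placeholder token name; the labels stay human-readable while the values are the time modifiers Splunk understands.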
Hi All, can anyone confirm whether there is a prebuilt dashboard available for SAP Customer Data Cloud? If there is no prebuilt dashboard, how would I pull all the logs from the SAP system to monitor infrastructure metrics in a Splunk dashboard? Note: currently all the logs are sent via the connector, however I can't see endpoint logs.
As I was going through the Asset and Identity Management manual, I couldn't see anything related to how to enrich the two lookup files assets_by_cidr.csv and assets_by_str.csv. For some reason (I couldn't figure out why), assets_by_str.csv is filled with data and is populating results when running any search. However, nothing is getting written to assets_by_cidr.csv; I'm not sure whether it is supposed to be filled automatically, and I can't find any configuration that shows where these two CSVs take their data from. I can only see that they come from the app SA-IdentityManagement. Can someone please help with troubleshooting this? Where are these two lookup tables expected to get their data from, and how? Lastly, for more context, the final purpose is to fulfill a data-enrichment request for the specific use case Detect Large Outbound ICMP Packets.
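As a quick check, a hedged sketch for inspecting what each file currently contains, assuming the lookup files are shared so they are visible from your search context:

| inputlookup assets_by_str.csv

| inputlookup assets_by_cidr.csv

If the CIDR file is consistently empty, that usually means none of the configured asset sources contain CIDR- or range-style entries for the asset merge process to route there, so the next place to look is the asset source lookups configured under Asset and Identity Management.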
I have a Splunk instance deployed on an EBS volume mounted to an EC2 instance. I started working on enabling SmartStore for one of my indexes, but whenever indexes.conf is configured to let one of my indexes use SmartStore, Splunk hangs on this step when I restart it:

Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...

Nothing is found in the logs; I am just puzzled how to fix this. Can anybody hint at what the issue can be?

indexes.conf:

[volume:s3volumeone]
storageType = remote
path = s3://some-bucket-name
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com

[smart_store_index_10]
remotePath = volume:s3volumeone/$_index_name
homePath = $SPLUNK_DB/$_index_name/db
coldPath = $SPLUNK_DB/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
maxGlobalDataSizeMB = 0
maxGlobalRawDataSizeMB = 0
homePath.maxDataSizeMB = 1000
maxHotBuckets = 2
maxDataSize = 3
maxWarmDBCount = 5
frozenTimePeriodInSecs = 10800

The small numbers for bucket size etc. are intentional, to allow quick testing of settings.
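A hedged troubleshooting sketch: a hang at "Checking indexes..." with a remote volume often comes down to the instance not being able to reach or authenticate to the S3 bucket. If the EC2 instance profile is not supplying credentials, they can be added to the volume stanza (these are standard indexes.conf settings), and the remote store can be exercised directly from the CLI:

[volume:s3volumeone]
storageType = remote
path = s3://some-bucket-name
remote.s3.endpoint = https://s3.us-west-2.amazonaws.com
# Only needed if an IAM instance profile is not providing credentials
remote.s3.access_key = <access key>
remote.s3.secret_key = <secret key>

splunk cmd splunkd rfs -- ls --starts-with volume:s3volumeone

If the rfs command stalls or errors, the problem is connectivity or permissions to the bucket rather than the index settings themselves.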
To start with, two examples. The first one:

index=s1 | timechart sum(kmethod) avg(kduration)

generates a two-series chart. The second one uses 'count by':

index=s1 | timechart count by kmethod

generates just one series. I would like to join both timecharts and, in effect, merge the "count by" with the plain "avg" or "sum", so that:

- the first is a 'stacked bar' from the second example
- the second is a 'line' from the second series of the first example

Any hints? K.
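A minimal sketch of one way to combine them, assuming both searches use the same time range and span so the rows line up; avg_duration can then be configured as a chart overlay (line) while the kmethod series render as stacked columns:

index=s1
| timechart count BY kmethod
| appendcols
    [ search index=s1 | timechart avg(kduration) AS avg_duration ]

appendcols joins row by row, so mismatched spans between the two timecharts will misalign the results.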
I have the following query that gives me week-over-week comparisons for the past month:

index="myIndex" earliest=-1mon "my query" | timechart count as Visits | timewrap w

I have added dropdowns to my dashboard to filter this data by a user-selected time window for every day in the one-month range. The four dropdowns correspond to the start hour, start minute, end hour, and end minute of the time window in military time. For example, to filter the data by 6:30 AM - 1:21 PM each day, the tokens would have the following values:

$start_hour_token$: '6'
$start_minute_token$: '30'
$end_hour_token$: '13'
$end_minute_token$: '21'

How would I modify the original query to make this work? Thanks! Jonathan
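A minimal sketch, assuming the tokens substitute as plain numbers: convert each event's time of day to minutes past midnight and filter before the timechart runs:

index="myIndex" earliest=-1mon "my query"
| eval h=tonumber(strftime(_time,"%H")), m=tonumber(strftime(_time,"%M"))
| where (h*60+m) >= ($start_hour_token$*60 + $start_minute_token$) AND (h*60+m) <= ($end_hour_token$*60 + $end_minute_token$)
| timechart count as Visits
| timewrap w

If the dropdowns emit zero-padded strings such as '06', wrapping the tokens in tonumber() keeps the arithmetic valid.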
Is there a way to create a query to show errors from the Splunk TAs and the KV store?
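A hedged starting point against the internal logs, assuming access to index=_internal; component names vary by version, so treat the wildcards as assumptions to adjust:

index=_internal sourcetype=splunkd log_level=ERROR (component=*KVStore* OR source=*mongod.log*)
| stats count BY host, component

For add-on (TA) errors, the relevant events usually live in the same index under sources such as *splunkd.log*, with the add-on's name appearing in the component or the event text.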
I have 4 panels in a single row; because of this, the text on the y-axis is compressed and shows as ser...e_0 instead of server_xyz_0. When I mouse over ser...e_0, I want the tooltip to show the full name server_xyz_0. Please help. Thanks.
Hi, is it possible to create a custom permission structure within Splunk apps and integrate it with Splunk user roles? Use case: I'm developing a custom Splunk app using the Splunk UI Toolkit. I need to restrict some features within the app based on the custom Splunk user role the user has been assigned. Thank you.
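A hedged sketch of the usual building block: the app can look up the roles of the logged-in user via the REST API and then show or hide features client-side. The SPL form of the call (the same endpoint is reachable from a UI Toolkit app via a REST request) is:

| rest /services/authentication/current-context splunk_server=local
| table username, roles

Keep in mind this only hides UI features; anything that must be genuinely protected should also be enforced server-side through knowledge-object permissions or capability/index restrictions on the role.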
Currently, I have a table that looks like this:

Table1
Hostname     Vendor     Product     Version
-----------------------------------------------------------------
hostname1    vendor1    product1    version1
             vendor2    product2    version2
             vendor3    product3    version3
             vendor4    product4    version4
-----------------------------------------------------------------
hostname2    vendor1    product2    version2
             vendor2    product4    version1
             vendor3    product3    version5
             vendor4    product6    version3
-----------------------------------------------------------------

In this scenario, each hostname has a list of vendors, products and versions attached to it. What I want to create is the following:

Hostname     Vendor     Product     Version
hostname1    vendor1    product1    version1
hostname1    vendor2    product2    version2
hostname1    vendor3    product3    version3
hostname1    vendor4    product4    version4
hostname2    vendor1    product2    version2
hostname2    vendor2    product4    version1
hostname2    vendor3    product3    version5
hostname2    vendor4    product6    version3

Does anyone have any ideas?
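If the blank Hostname cells are genuinely empty in the search results (one row per vendor/product/version), a minimal sketch is to carry the last non-empty value forward:

| table Hostname Vendor Product Version
| filldown Hostname

If instead each host is a single row with multivalue Vendor/Product/Version fields, an mvzip/mvexpand approach would be needed rather than filldown.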
In my search:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval RefUser=if(Mes!="", Mes, substr("0" + tostring((tonumber(strftime(_time, "%m"))-1)), -2) + "-" + strftime(_time, "%Y"))
| eval RefUser = strptime(RefUser,"%Y/%m")
| eval RefAtual = relative_time(-time, "-1mon")

I need to get the difference between RefUser and RefAtual in months, and count by this difference.
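A minimal sketch for the month difference, assuming RefUser and RefAtual both end up as epoch timestamps (note that relative_time expects _time, and the strptime format has to match the string actually built, e.g. "%m-%Y"):

| eval user_months  = tonumber(strftime(RefUser, "%Y"))*12 + tonumber(strftime(RefUser, "%m"))
| eval atual_months = tonumber(strftime(RefAtual, "%Y"))*12 + tonumber(strftime(RefAtual, "%m"))
| eval diff_months  = atual_months - user_months
| stats count BY diff_months

Converting each timestamp to a year*12+month count avoids the edge cases of dividing a seconds difference by an average month length.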
Hi, is it possible to display only weekdays in a timechart? PS: I am not looking to discard the data for the weekend; I just don't want the column chart to display the weekend dates. On weekends it is always 0, so I'm hoping to exclude them from the chart.
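A minimal sketch, assuming a daily span: let the timechart run over all data, then drop the Saturday and Sunday rows before the chart renders:

... | timechart span=1d count
| eval wday=strftime(_time, "%a")
| where NOT (wday=="Sat" OR wday=="Sun")
| fields - wday

The search still covers the full range; only the zero-valued weekend buckets are removed from the displayed results.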
Good morning, I am having issues with admon and running into this error:

Streamed Search Execute Failed Because: Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/var/run/searchpeers/B3E####/apps/Splunk_TA_Windows/bin/user_account_control_property.py'.

Transforms on the indexer:

#########Active Directory ##########
[user_account_control_property]
external_cmd = user_account_control_property.py userAccountControl userAccountPropertyFlad
external_type = python
field_list = userAccountControl, userAccountPropertyFlag
python.version = python3

The script is located within the bin directory of the app: .../bin/user_account_control_property

The error happens when I run this search:

index=test source=ActiveDirectory

I have an app called ADMON on the deployment server which is being deployed to my primary domain controllers. At first I saw a ton of sync data; after that it started erroring out with the above error message.
Hello Splunkers!! I have a scenario in which there is a discrepancy between scheduled-search results and index-search results. The scheduled search uses a summary index, while the index search queries the index directly. The results coming from the index search are correct, while the results coming from the scheduled search are wrong. Please help me with the known workarounds for this and their consequences.
I am attempting to integrate a third-party application with an existing log4j implementation into Splunk. I have what I believe should be a working appender configuration in my log4j.properties file. However, when my Tomcat server starts I receive the index-out-of-bounds error below. I am using logging library version 1.9.0. I'm looking for advice on where to look in order to resolve this. I have included the appender config for reference.

APPENDER CONFIG:
appender.splunkHEC=com.splunk.logging.HttpEventCollectorLog4jAppender
appender.splunkHEC.name=splunkHEC
appender.splunkHEC.layout=org.apache.log4j.PatternLayout
appender.splunkHEC.layout.ConversionPattern=%d{ISO8601} [%t] %p %c %x - %m%n
appender.splunkHEC.url=<redacted>
appender.splunkHEC.token=<redacted>
appender.splunkHEC.index=ioeng
appender.splunkHEC.source=IIQ_Tomcat
appender.splunkHEC.sourceType=log4j
appender.splunkHEC.batch_size_count=100
appender.splunkHEC.disableCertificateValidation=true

RELEVANT JAVA STACK:
Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end -1, length 9
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
at java.base/java.lang.String.substring(String.java:1874)
at org.apache.logging.log4j.util.PropertiesUtil.partitionOnCommonPrefixes(PropertiesUtil.java:555)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.build(PropertiesConfigurationBuilder.java:156)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:56)
at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:35)
at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:557)
at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:481)
at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:323)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:695)
at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:716)
at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:270)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:155)
at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:47)
at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:137)
at org.apache.logging.log4j.jcl.LogAdapter.getContext(LogAdapter.java:40)
at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:47)
at org.apache.logging.log4j.jcl.LogFactoryImpl.getInstance(LogFactoryImpl.java:40)
at org.apache.logging.log4j.jcl.LogFactoryImpl.getInstance(LogFactoryImpl.java:55)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:655)
at sailpoint.web.StartupContextListener.<clinit>(StartupContextListener.java:59)

SERVER DETAILS:
20-Mar-2024 11:52:03.882 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/9.0.64
20-Mar-2024 11:52:03.883 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jun 2 2022 19:08:46 UTC
20-Mar-2024 11:52:03.884 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.64.0
20-Mar-2024 11:52:03.884 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
20-Mar-2024 11:52:03.885 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 3.10.0-1160.108.1.el7.x86_64
20-Mar-2024 11:52:03.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
20-Mar-2024 11:52:03.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/java/jdk-11.0.22
20-Mar-2024 11:52:03.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 11.0.22+9-LTS-219
20-Mar-2024 11:52:03.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
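The stack trace comes from Log4j 2's properties parser, which chokes on a key that has nothing after the appender name (the bare appender.splunkHEC=<class> line; "splunkHEC" is 9 characters, which lines up with the "length 9" in the exception). In the Log4j 2 properties format the appender class is declared with a .type entry rather than assigning the class directly. A hedged sketch under that assumption, keeping the original attribute names; the plugin type name SplunkHttp should be verified against the splunk-library-javalogging documentation for version 1.9.0:

appender.splunkHEC.type = SplunkHttp
appender.splunkHEC.name = splunkHEC
appender.splunkHEC.url = <redacted>
appender.splunkHEC.token = <redacted>
appender.splunkHEC.index = ioeng
appender.splunkHEC.source = IIQ_Tomcat
appender.splunkHEC.sourceType = log4j
appender.splunkHEC.batch_size_count = 100
appender.splunkHEC.disableCertificateValidation = true
appender.splunkHEC.layout.type = PatternLayout
appender.splunkHEC.layout.pattern = %d{ISO8601} [%t] %p %c %x - %m%n

rootLogger.appenderRef.splunkHEC.ref = splunkHEC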
Hello there, we are looking to use the Custom option to send VPC flow log data to Splunk Cloud. Previously we were using the default set of fields. There's a need to ingest additional fields, without using the "all fields" option, in order to save on data ingest. The issue appears to be with the regex: the add-on cannot mix and match field names; rather, they need to be in a particular order, otherwise the data is not parsed properly.

Default format:
${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}

Custom format:
${version} ${account-id} ${vpc-id} ${subnet-id} ${interface-id} ${instance-id} ${flow-direction} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${pkt-srcaddr} ${pkt-dstaddr} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}

This is preventing us from including additional fields that would be useful to our team without ingesting everything. Has anyone else encountered this problem before?
We are streaming Dynatrace metric data into Splunk, and for some reason we are seeing duplicate 'MessageDeduplicationId' values, so I am trying to avoid the duplicate entries using the dedup command. But I am not retrieving any results after adding dedup.

Here is my initial query, which returns results (with duplicates):

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) As "Count" ,avg(calc:service.thaa_stress_requests_lr_tags) As "Response" where index=itsi_im_metrics AND source.name="DT_NonProd_SaaS" by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
| search Dimension.id IN ("*Process.aspx")

After adding dedup to avoid duplicate 'MessageDeduplicationId', there are no results:

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) As "Count" ,avg(calc:service.thaa_stress_requests_lr_tags) As "Response" where index=itsi_im_metrics AND source.name="DT_NonProd_SaaS" by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
| search Dimension.id IN ("*Process.aspx")
| dedup MessageDeduplicationId

Sample payload:
Dimension.id: xxxProcess.aspx
Dimension.name: Literal Not Found
MessageDeduplicationId: a901b712889217fc194cd0446a70325e
aggregation: avg
entity.service.id: xxx
entity.service.name: xxxx
metric_name:calc:service.thaa_stress_requests_lr_tags: 1613759
resolution: 1m
source.name: xxxx
unit: MicroSecond
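The dedup returns nothing because mstats only outputs its aggregations plus the fields in its by clause, so MessageDeduplicationId does not exist in the results and every row is dropped. A hedged sketch, assuming MessageDeduplicationId is ingested as a metric dimension and can therefore appear in the by clause:

| mstats sum(calc:service.thaa_stress_requests_count_lr_tags) AS Count avg(calc:service.thaa_stress_requests_lr_tags) AS Response where index=itsi_im_metrics AND source.name="DT_NonProd_SaaS" by Dimension.id MessageDeduplicationId
| dedup MessageDeduplicationId
| stats sum(Count) AS Count avg(Response) AS Response by Dimension.id
| eval Response=round((Response/1000000),2), Count=round(Count,0)
| search Dimension.id IN ("*Process.aspx")

Note the final avg is an average of per-message averages, which can differ slightly from a single overall average across all raw data points.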