All Posts


Check out the filldown command.
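A minimal sketch of how filldown might apply to the table question further down this page, assuming the search already yields the four columns shown there (your_base_search is a placeholder, not part of the original thread):

your_base_search
| table Hostname Vendor Product Version
| filldown Hostname

filldown replaces null values with the most recent non-null value for the field, so each blank Hostname cell inherits the hostname above it and every row ends up fully populated.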
Leaving it as-is. The SplunkForwarder folder and the contents within it are owned by root:wheel; the Applications folder is owned by root:admin.
Currently, I have a table that looks like this:

Table1
Hostname   Vendor    Product    Version
---------------------------------------------
hostname1  vendor1   product1   version1
           vendor2   product2   version2
           vendor3   product3   version3
           vendor4   product4   version4
---------------------------------------------
hostname2  vendor1   product2   version2
           vendor2   product4   version1
           vendor3   product3   version5
           vendor4   product6   version3
---------------------------------------------

In this scenario, each hostname has a list of vendors, products, and versions attached to it. What I want to create is the following:

Hostname   Vendor    Product    Version
hostname1  vendor1   product1   version1
hostname1  vendor2   product2   version2
hostname1  vendor3   product3   version3
hostname1  vendor4   product4   version4
hostname2  vendor1   product2   version2
hostname2  vendor2   product4   version1
hostname2  vendor3   product3   version5
hostname2  vendor4   product6   version3

Does anyone have any ideas?
Interesting! Thanks for this; I'll review and give this a try. One question:  Are you creating a Splunk user and changing permissions recursively to splunk:splunk, or are you just leaving it as-is? (To this point, we've been doing the latter, but I'm wondering if creating a dedicated user might be preferable?)
On my search:

index=raw_fe5_autsust Aplicacao=HUB Endpoint="*/"
| eval RefUser=if(Mes!="", Mes, substr("0" + tostring((tonumber(strftime(_time, "%m"))-1)), -2) + "-" + strftime(_time, "%Y"))
| eval RefUser = strptime(RefUser, "%Y/%m")
| eval RefAtual = relative_time(_time, "-1mon")

I need to get the difference between RefUser and RefAtual in months, and to count by this diff.
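One hedged way to sketch the month difference, assuming RefUser and RefAtual both end up as epoch times as in the search above (converting each to a running month count of 12*year + month sidesteps variable month lengths):

| eval MonthsDiff = (tonumber(strftime(RefAtual, "%Y")) * 12 + tonumber(strftime(RefAtual, "%m"))) - (tonumber(strftime(RefUser, "%Y")) * 12 + tonumber(strftime(RefUser, "%m")))
| stats count by MonthsDiff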
Although, I do notice that notifications are still enabled. I created a config profile that mutes Critical Alerts and Notifications for Bundle ID: aplt. Tested it once and it seemed to work, but I'd like to test again on a fresh machine to verify.
So, I was able to get it to silently deploy and it seems to be working as intended.

I built the package using Composer, making sure to set the proper R-W-X, Owner, and Group permissions for /Applications/SplunkForwarder, then added the deploymentclient.conf file within the /Applications/SplunkForwarder/etc/system/local directory before building the package.

Then for my policy I added that package, and for the silent install I added a script which contains:

#!/bin/sh
# Accept Splunk licenses
/Applications/SplunkForwarder/bin/splunk start --accept-license --auto-ports --no-prompt --answer-yes
# Enable boot start
/Applications/SplunkForwarder/bin/splunk enable boot-start
# Hide the folder
chflags hidden /Applications/SplunkForwarder
Hi, did you find the secure solution? Regards
Yes, I am. Previously, with an older version, we just used Jamf Composer to watch the file system, then did the manual .pkg install (user interaction and all), put in our settings files, then had Composer create the package. I really don't want to keep having to do that kind of sloppy install, but it's beginning to look like we may have to.
The _time field is given special treatment in most if not all charts, such that a complete timeline axis is used. If you want to exclude sections, replace the _time field with another; you may want to format the time with strftime(), otherwise the x-axis will just be the time in epoch format (seconds since the start of 1970).
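A minimal sketch of that approach, assuming a daily count and using a formatted day field in place of _time so that weekend buckets simply never appear on the axis (the index name is a placeholder):

index=your_index
| where strftime(_time, "%a")!="Sat" AND strftime(_time, "%a")!="Sun"
| eval day=strftime(_time, "%Y-%m-%d")
| chart count by day

Because day is an ordinary string field rather than _time, the chart only draws the categories present in the results instead of a continuous timeline.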
Hi dkv21210, Are you using JAMF as your MDM?
Hi, is it possible to display only weekdays in a timechart?

PS: I am not looking to discard the data for the weekend. It's just that the column chart should not display the weekend dates; on weekends it's always 0, so I'm hoping to exclude them from the chart.
I'd love to know as well. I've been banging my head against a wall with this, off and on, for a couple of months now. It's insane to me how impossible it is to find any solutions online (never mind this forum), and Splunk clearly doesn't care to address it. How exactly are we to do quiet deployments of UF to a fleet of Macs managed by MDM? As it stands, the DMG is out (too much user interaction required, which apparently can't be suppressed), and the .tgz also requires a combination of scripting, permissions changes, possibly creation of a new user, setting environment variables, and moving config files into place. Can I do this myself? Sure, but why should I have to? Even with the leverage of $GIGANTIC_FEDERAL_AGENCY, Splunk doesn't care to help us. Godspeed to us all, I guess.
Thank you both. Eventstats worked perfectly and removed the process of adding IPs to a NOT list. 
Good morning, I am having issues with admon and running into this error:

Streamed Search Execute Failed Because: Error in 'lookup' command: Script execution failed for external search command '/opt/splunk/var/run/searchpeers/B3E####/apps/Splunk_TA_Windows/bin/user_account_control_property.py'.

Transforms on the indexer:

######### Active Directory ##########
[user_account_control_property]
external_cmd = user_account_control_property.py userAccountControl userAccountPropertyFlad
external_type = python
field_list = userAccountControl, userAccountPropertyFlag
python.version = python3

The script is located within the bin directory of the app: .../bin/user_account_control_property

The error is happening when I run this search:

index=test source=ActiveDirectory

I have an app created called ADMON on the deployment server which is being deployed to my primary domain controllers. At first, I saw a ton of sync data; after that it was erroring out with the above error message.
A quick and ugly hack would be to run your original search (the one calculating availability) and then do:

| append [ | inputlookup Component_avail.csv | eval Availability=100 ]
| stats min(Availability) as Availability by Component

Alternatively, you can define your lookup to contain both fields - Component and Availability (with Availability set to 100 across the board) - and use:

| inputlookup append=t Component_avail.csv

and then do the stats. This way, if your original search calculates some non-100% availability, you'll get it in your final results. If there is no sub-100 availability calculated, you'll get the static 100 provided by the lookup.
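Put together, the second variant might look like this; a sketch assuming your_availability_search stands in for the original search (a placeholder name) and that the lookup carries both the Component and Availability columns:

your_availability_search
| inputlookup append=t Component_avail.csv
| stats min(Availability) as Availability by Component

min() does the work here: a calculated sub-100 value always beats the lookup's static 100, and components missing from the search results fall back to 100.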
Hello Splunkers! I have a scenario in which there is a discrepancy between scheduled search results and index search results. The scheduled search is using a summary index, while the index search queries the index directly. The results coming from the index search are correct, while the results coming from the scheduled search are wrong. Please help me with the known workarounds for this and their consequences.
I am not sure which "lookup" you are referring to by "here" - in both the example I gave and the one @bowesmana gave, we are using inputlookup in a subsearch, which will retrieve all entries from the lookup store, and since they are being appended to the pipeline of events retrieved by the main search, the events from the main search still exist. By using the stats command, as we have shown, you can effectively combine these two sets of events. In my solution, by setting the flag field to different values in the two different sets, it is possible to determine whether the events with common time and value have come from one or the other or both sets of events. This is how you can determine which values (from the lookup) are missing from the main search within each timeframe.
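A minimal sketch of the flag idea, assuming a lookup expected_values.csv with a value column (all names here are hypothetical placeholders, not the original thread's):

index=your_index
| stats count by value
| eval flag="events"
| append [ | inputlookup expected_values.csv | eval flag="lookup" ]
| stats values(flag) as flag by value
| where mvcount(flag)=1 AND flag="lookup"

Rows that survive the final where carry only the "lookup" flag, i.e. values that the lookup expects but the main search never produced.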
I am attempting to integrate a third-party application with an existing log4j implementation into Splunk. I have what I believe should be a working appender configuration in my log4j.properties file. However, when my Tomcat server starts I receive the below index-out-of-bounds error. I am using logging library version 1.9.0. I'm looking for advice on where to look in order to resolve this. I have included the appender config for reference.

APPENDER CONFIG:

appender.splunkHEC=com.splunk.logging.HttpEventCollectorLog4jAppender
appender.splunkHEC.name=splunkHEC
appender.splunkHEC.layout=org.apache.log4j.PatternLayout
appender.splunkHEC.layout.ConversionPattern=%d{ISO8601} [%t] %p %c %x - %m%n
appender.splunkHEC.url=<redacted>
appender.splunkHEC.token=<redacted>
appender.splunkHEC.index=ioeng
appender.splunkHEC.source=IIQ_Tomcat
appender.splunkHEC.sourceType=log4j
appender.splunkHEC.batch_size_count=100
appender.splunkHEC.disableCertificateValidation=true

RELEVANT JAVA STACK:

Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end -1, length 9
    at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3319)
    at java.base/java.lang.String.substring(String.java:1874)
    at org.apache.logging.log4j.util.PropertiesUtil.partitionOnCommonPrefixes(PropertiesUtil.java:555)
    at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationBuilder.build(PropertiesConfigurationBuilder.java:156)
    at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:56)
    at org.apache.logging.log4j.core.config.properties.PropertiesConfigurationFactory.getConfiguration(PropertiesConfigurationFactory.java:35)
    at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:557)
    at org.apache.logging.log4j.core.config.ConfigurationFactory$Factory.getConfiguration(ConfigurationFactory.java:481)
    at org.apache.logging.log4j.core.config.ConfigurationFactory.getConfiguration(ConfigurationFactory.java:323)
    at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:695)
    at org.apache.logging.log4j.core.LoggerContext.reconfigure(LoggerContext.java:716)
    at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:270)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:155)
    at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:47)
    at org.apache.logging.log4j.LogManager.getContext(LogManager.java:196)
    at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:137)
    at org.apache.logging.log4j.jcl.LogAdapter.getContext(LogAdapter.java:40)
    at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:47)
    at org.apache.logging.log4j.jcl.LogFactoryImpl.getInstance(LogFactoryImpl.java:40)
    at org.apache.logging.log4j.jcl.LogFactoryImpl.getInstance(LogFactoryImpl.java:55)
    at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:655)
    at sailpoint.web.StartupContextListener.<clinit>(StartupContextListener.java:59)

SERVER DETAILS:

20-Mar-2024 11:52:03.882 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/9.0.64
20-Mar-2024 11:52:03.883 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jun 2 2022 19:08:46 UTC
20-Mar-2024 11:52:03.884 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 9.0.64.0
20-Mar-2024 11:52:03.884 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
20-Mar-2024 11:52:03.885 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 3.10.0-1160.108.1.el7.x86_64
20-Mar-2024 11:52:03.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
20-Mar-2024 11:52:03.886 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/java/jdk-11.0.22
20-Mar-2024 11:52:03.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 11.0.22+9-LTS-219
20-Mar-2024 11:52:03.887 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
Were you able to find any resolution to this?