All Topics


Hello,

We are moving from a single deployment to a clustered environment. Current scenario: for one of my dashboards, the lookup file is generated by a Python script run from a cron job. Since I don't want the data indexed, I simply create the file and place it in the lookups folder of the app that contains the dashboard.

When I move to the clustered environment, where do I place the script that generates the lookup, and where do I save the lookup file so that it is automatically shared across all the search heads?

Thanks
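One commonly suggested pattern for a search head cluster (a sketch only; the lookup and source names here are hypothetical) is to write the lookup through `outputlookup` from a scheduled search, because lookups written that way on one cluster member are replicated to the other members automatically, whereas files dropped directly into an app's lookups folder are not:

```spl
| inputcsv my_script_output.csv
| outputlookup my_dashboard_lookup.csv
```

The external script would then only need to deliver its CSV to one member (or the script's logic could be rebuilt in SPL), and replication handles distribution.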
I am just getting started with Splunk. I want to get the Splunk Docker container and run Splunk under the Free license. Docker Hub has Splunk Enterprise, which I understand comes with a 60-day trial license. Is there a container with the Free license, or can I convert the Enterprise trial license to a Free license?
Hi, I need help creating a dashboard. I have Add Step and Delete Step buttons. When I click Add Step, a first dropdown input should appear; when I click Add Step again, a second dropdown input should appear. Both dropdowns have the same values (Start -> download -> run -> restart -> error), but each needs to be added as a separate step. When I click Delete Step, the latest step should be removed. How do I build this?
Hello Splunkers, 'real-time' alerts are using up the maximum resources and causing skipped searches. Does a cron schedule that runs every minute with a time range of the last 61 seconds also stress the system? What would be the ideal time range to use?
Below is a setup that submits an event to Splunk. I would like to avoid recreating a connection for every event and instead stream several events at a time. What can I do to achieve this? I attempted to use a socket below, but it always errors out with `<msg type="ERROR">Read Timeout</msg>`.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.util.HashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.splunk.Index;
import com.splunk.Receiver;
import com.splunk.SSLSecurityProtocol;
import com.splunk.Service;

public class Main {
    static final ObjectMapper mapper = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        Map<String, Object> map = new HashMap<>();
        map.put("host", System.getenv("SPLUNKHOST"));
        map.put("username", System.getenv("USER"));
        map.put("password", System.getenv("PASSWORD"));

        Service.setSslSecurityProtocol(SSLSecurityProtocol.TLSv1_2);
        final Service srv = Service.connect(map);
        working(srv);
        broken(srv);
    }

    // One-shot HTTP submission: works, but creates a request per event.
    static void working(Service srv) throws Exception {
        final Receiver receiver = srv.getReceiver();
        String str = mapper.writeValueAsString(data("rec_http"));
        receiver.log("api_analytics", str);
    }

    // Streaming over an attached socket: always fails with a read timeout.
    static void broken(Service srv) throws Exception {
        final Index index = srv.getIndexes().get("api_analytics");
        final Socket sock = index.attach();
        String str = mapper.writeValueAsString(data("sock"));
        final OutputStream out = sock.getOutputStream();
        final InputStream in = sock.getInputStream();
        out.write(str.getBytes());
        String r = new String(in.readAllBytes());
        System.out.println(r);
        sock.close();
    }

    static Object data(String src) {
        Map<Object, Object> map = new HashMap<>();
        map.put("index", "api_analytics");
        map.put("source", src);
        return map;
    }
}
```
I'm trying to create a query to show all users who have purchased more than one type of product. Each event has a "user" field and a "product" field. I only want to see the users that have purchased more than one type of product.

| stats count by user product

This shows me all user and product combinations, but I don't know how to filter out users who purchased only one type of product. I feel it should be a very simple query, but I can't seem to figure it out.
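One possible approach (a sketch, assuming the field names `user` and `product` as described) is to count distinct product types per user with `dc()` and then filter:

```spl
| stats dc(product) AS product_types BY user
| where product_types > 1
```

`dc(product)` counts distinct product values per user, so the `where` clause keeps only users who bought more than one type.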
Hi, in Splunk's internal log I can see that a log file was processed for indexing, but when I search that index from the SH, I cannot find the events. This happens intermittently on a few of the log files.

Log from splunkd on the HF:

INFO TailReader - Batch input finished reading file='/xx/log/xxxxxxx/processed/archive/processed.log_2021-01-20T12:45:01.log'

No results from the search below over All Time:

index=* source="/xxx/log/xxxxxxx/processed/archive/processed.log_2021-01-20T12:45:01.log"
Per the documentation, and when using the Splunk Upgrade checker, there is a requirement to move to Python 3 after 8.0. The Bitbucket Add-on is failing the check, which states that the add-on will not work after the upgrade. Are there any plans to update it to Python 3? @twesty
I am trying to write a report that queries our Windows Security event logs for event ID 4738, "user account was changed." There is a field, MSADChangedAttribute, which looks like this:

SAM Account Name: -
Display Name: -
User Principal Name: -
Home Directory: -
Home Drive: -
Script Path: -
Profile Path: -
User Workstations: -
Password Last Set: 1/26/2021 2:31:01 AM
Account Expires: -
Primary Group ID: -
AllowedToDelegateTo: -
Old UAC Value: -
New UAC Value: -
User Account Control: -
User Parameters: -
SID History: -
Logon Hours: -

I want to make the report more condensed and human-readable by extracting only the lines in that field which do not include "-". I have identified a regex that does this, but I can't figure out how to write it as a rex extraction. For instance, the following pattern works on regex101.com to extract a new 'output' field:

(?<output>^[^-]*$)

but when I put it into rex it produces no result:

| rex field=MSADChangedAttribute max_match=0 "(?<Changed>^[^-]*$)"

(Note: I added max_match=0 because sometimes there is more than one line with changes.)
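A sketch of one likely fix, assuming the field really contains embedded newlines: regex101 typically has multiline mode enabled, but `rex` does not enable it by default, so `^` and `$` anchor only to the start and end of the whole field value. Adding an inline `(?m)` flag makes them match at each line boundary:

```spl
| rex field=MSADChangedAttribute max_match=0 "(?m)^(?<Changed>[^-\r\n]+)$"
```

Using `[^-\r\n]+` instead of `[^-]*` also prevents a single match from spanning line breaks and from capturing empty lines.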
Hi Splunkers,

Has anyone installed the Splunk for SolarWinds app? If so, what are the advantages of getting SolarWinds data into Splunk? Also, can I use those apps in a clustered environment? Thanks in advance.
Hi Splunkers! This is a scenario I came across recently; could anyone provide an answer?

Scenario: You are upgrading Splunk Core, ES, and ITSI in a hybrid environment. The on-prem portion is mostly focused on ingestion (HFs, deployment server, intermediaries), while the core Splunk application is based in AWS. On-prem and AWS use different technology stacks.

1) What is your approach to upgrading Core, ES, and ITSI with minimal interruption to the customer experience?
2) In what order would you upgrade Core, ES, and ITSI? Also, in what order would you upgrade the Splunk components (UFs, HFs, deployment server, deployers, indexers, CM, LM, MC, etc.)?
I’m looking for a solution to report and chart the amount of disk space in use on a user-defined set of Windows hosts. Right now I’m able to get both free space and % free space (through the metrics "LogicalDisk.Free_Megabytes" and "LogicalDisk.%_Free_Space"), but disk used is not reported. Is there a way to retrieve or derive the information with calculations like:

1) GB_free = MB_free / 1024
2) GB_total = (GB_free * 100) / percentage_free
3) GB_used = GB_total - GB_free

Since the disk sizes vary between hosts, deriving these from the performance values saves me from keeping a separate dataset of drive sizes and the related need to keep it updated. Ultimately the reporting could be a report by time and by host, or a timechart showing disk usage for multiple hosts.
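The arithmetic above can be expressed directly in SPL with `eval` (a sketch only; the exact metric field names depend on how the Perfmon data is ingested in your environment):

```spl
| eval GB_free  = 'LogicalDisk.Free_Megabytes' / 1024
| eval GB_total = (GB_free * 100) / 'LogicalDisk.%_Free_Space'
| eval GB_used  = GB_total - GB_free
| timechart span=1h avg(GB_used) BY host
```

The single quotes around the field names are needed because they contain dots and a `%` character.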
Hello, I am getting "Error in 'dbxquery' command: External search command exited unexpectedly with non-zero error code 1." for every search in the Splunk DB Connect app. Already-configured inputs are indexed, but when I try to run any search by hand, I always get this failure. I also cannot add any new input. I am using Splunk DB Connect 3.4.2 with a MySQL database. The data is indexed, so I am sure the connection is working. I can also select
I created a simple search:

index=index1 sourcetype="Perfmon:Free Disk Space" instance="D:\drive\drive_01"
| timechart span=1d max(Value)

It shows the amount of available free space over a given period, which is very useful for determining how fast a drive is filling up. The disadvantage is that you have to find the time period in which the drive went from 100% free space to 10% free space (the system leaves some space on the drive before switching to the next drive, so it never reaches 0%).

I would like to create a dashboard with a graph of the drive and a pull-down menu, based on a lookup file (or similar), that lets my Splunk users pick a drive and see its rate of disk usage over time. I am not an advanced Splunk user, hence this question. The process I envision:

- Select a drive from the pull-down list
- For the selected drive, find the point in time where % free space is 98%
- For the selected drive, find the point in time where % free space is 10%, or the current date/time if it has not yet reached 10%
- Display a line chart for the period found

Not sure whether something like that is possible, but a question not asked is an answer missed. Thank you!
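The pull-down part of the process above can be sketched in Simple XML (assumptions: a hypothetical lookup `drive_list.csv` with an `instance` column; the panel search would then use the `$drive$` token in its `instance=` clause):

```xml
<input type="dropdown" token="drive">
  <label>Drive</label>
  <search>
    <query>| inputlookup drive_list.csv | fields instance</query>
  </search>
  <fieldForLabel>instance</fieldForLabel>
  <fieldForValue>instance</fieldForValue>
</input>
```

Finding the 98%-to-10% window dynamically would still need search logic in the panel itself; the dropdown only supplies the drive selection.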
Hello guys, I tried to update server.conf, but Splunk crashed with a handshake failure when accessing https://localhost:8089:

[sslConfig]
#sslPassword = $7$OXZyp5GzoeMoXOIUSMqIFC+4Od7JKUacyjpUPBRobqwXbKYgAoObNg==
serverCert = $SPLUNK_HOME/etc/apps/APP_OUTPUTS/default/preproduction-server.pem
sslPassword = xxx
sslRootCAPath = $SPLUNK_HOME/etc/apps/APP_OUTPUTS/default/preproduction-cacert.pem
requireClientCert = true

Is it necessary to also update web.conf according to https://docs.splunk.com/Documentation/Splunk/7.3.4/Security/Securingyourdeploymentserverandclients? Might it break the deployment server / DS clients? Also, does it impact the implementation of a [tcp-ssl] port? Thanks.
We have the Splunk App for Kubernetes installed. We are seeing container logs. The problem is with the metrics: I see container names and metric names, but I don't see the actual values for these metrics. Please help!
Hi everyone, I am using the Splunk Enterprise Free version, and it stops every hour with "missing or malformed messages.conf stanza AUDIT:start_of_event_drops", even though I don't use a universal forwarder. Any solutions?
I want to count the number of occurrences of each distinct JSON structure. For example, my events have a field called data whose value is JSON, but the field can have a variety of structures, such as:

data = {a: "b"}
data = {d: "x", h: "e"}

Now I want to know how many events have data with each JSON structure. I don't care about the values; only the keys matter, so I want to count JSON objects that have the same set of keys.
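A minimal sketch of one way to group by key set, assuming a Splunk version (8.1+) where the `json_keys()` eval function is available and `data` holds valid JSON text:

```spl
| eval key_set = json_keys(data)
| stats count BY key_set
```

`json_keys()` returns the top-level key names of the object, so events whose `data` values share the same keys (regardless of values) land in the same `key_set` group. Nested keys would need additional handling.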
Hi all, I created a heatmap of hourly data for a day, with addcoltotals at the end. The heatmap's X axis is not aligned with the values it displays. Initially I thought it was due to the addcoltotals value, but even after removing it the X axis is still misaligned. If you look at the X-axis markers I circled in red, they drift further and further as the hours progress, so the 23:00 label ends up near the totals blocks. Please let me know how I can align the X axis with the heatmap blocks, and whether it is possible to show the totals without affecting the alignment of the X axis.
We are forwarding IIS logs from UFs to a heavy forwarder, and the heavy forwarder is supposed to send them on to a third party. I've confirmed with packet captures that the UFs are getting the logs to the heavy forwarder, but for some reason the HF isn't sending them on. It is, however, sending the other data sources I've configured, like WinEventLogs and DHCP logs. Does anyone know what might be causing those specific logs to stall at the HF? I'm really stumped by this one. There is one caveat, described below, where we can get it to work, but it's not optimal.

Here's basically how we have it configured:

- UFs clone all data using two tcpout groups: (1) send to indexers and (2) send to the HF
- The HF has indexAndForward set to "false" and is supposed to send only WinEventLog, DHCP logs, and IIS logs, filtering out everything else

## props.conf ##

[source::WinEventLog:*]   <- this works
TRANSFORMS-routing = 3rdpartyOut

[DhcpSrvLog]   <- this works
TRANSFORMS-routing = 3rdpartyOut

[ms:iis:auto]   <- does not work; also tried using the source instead of the sourcetype
TRANSFORMS-routing = 3rdpartyOut

## transforms.conf ##

[3rdpartyOut]
REGEX = .
SOURCE_KEY = MetaData:Host
DEST_KEY = _SYSLOG_ROUTING
FORMAT = 3rdparty

## outputs.conf ##

# Defaults - routes everything to "nothing" by default
[syslog]
defaultGroup = nothing

[syslog:3rdparty]
sendCookedData = false
server = x.x.x.x:xxxx

A couple of random notes:

- We are using separate transforms and props to manually tag all IIS logs with "IISWebLog" (thanks to someone on this forum for help with that).
- If we use the inputs.conf in the TA we built for the UFs (the one that tells the UFs to clone their data to the heavy forwarder) to start monitoring the IIS logs, without changing anything on the HF, it actually works: the logs are ingested into Splunk and sent all the way to the third party, but for some reason they don't get the "IISWebLog" tag. This also gets pushed out to far more servers than we actually want to monitor IIS on, so it wouldn't be ideal anyway. But it's interesting that it somehow gets the logs all the way through.

Thank you for any help!