All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello guys, we have to remove some of the fields permanently. Is there a configuration file or some other way to remove fields from the backend? Note: we are not looking for "fields -" to remove the fields at search time.
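For what it's worth, one way to keep a field's value out of newly indexed data (a sketch only; the sourcetype name, field name, and regex are assumptions, and this affects new data, not events already indexed) is a SEDCMD in props.conf on the parsing tier:

# props.conf (indexer or heavy forwarder); strips "secretfield=<value>" from _raw at index time
[your:sourcetype]
SEDCMD-remove_secretfield = s/secretfield=\S+\s?//g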
I want to control the number of concurrent user searches on an app-by-app basis. I think it is possible to control the number of concurrent executions on a role-by-role basis, but is it possible to control it on an app-by-app basis? If it is, please tell me how. (I think it might be feasible by distributing a limits.conf with base_max_searches under the app.)
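For reference, a sketch of the role-based control mentioned above (the role name and quota value are assumptions); per-role search concurrency is set in authorize.conf:

# authorize.conf
[role_app_users]
srchJobsQuota = 5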
Hello splunkies! I'm trying to be an admin and I'm adding data manually via my inputs.conf. Please see my scenario:

Path: /logfiles/syslog/log.txt
The file is the output of a script that contacts an internal REST API. There are two kinds of requests in this file:
1. http://localhost:8080/api/requests/xTraining.json shows data from the non-production host and should be written to index = API-NPTraining
2. http://localhost:8080/api/requests/Training.json shows data from the production host and should be written to index = API-PTraining
Both should use sourcetype ss:training. Data in this file will rotate daily to log.txt.1020, log.txt.1021, etc.

I have my stanzas like this:

# first stanza
[monitor:///logfiles/syslog/log*.txt]
disabled = 0
host = http://localhost:8080/api/requests/xTraining.json
index = API-NPTraining
sourcetype = ss:training

# second stanza
[monitor:///logfiles/syslog/log*.txt]
disabled = 0
host = http://localhost:8080/api/requests/Training.json
index = API-PTraining
sourcetype = ss:training

What am I missing? Am I doing something wrong? Thank you.
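A minimal sketch of how per-event index routing is usually done with props/transforms on the parsing instance rather than with two monitor stanzas; it assumes each event's raw text contains the request URL so the two feeds can be told apart, and the stanza names are hypothetical (index names are lowercased here because Splunk index names cannot contain uppercase letters):

# props.conf (on the heavy forwarder or indexer that first parses the data)
[ss:training]
TRANSFORMS-route_training = route_np_training

# transforms.conf
[route_np_training]
SOURCE_KEY = _raw
REGEX = xTraining\.json
DEST_KEY = _MetaData:Index
FORMAT = api-nptraining

A second transforms stanza matching the production URL (careful that "Training\.json" alone would also match the xTraining URL), or a default index set on the input, would handle the other feed.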
Hi Splunkers, I have a search with an input token that is not working in my dashboard query. t_TargetType is the token name:

| search AFRoute=if($t_TargetType|s$ == "A","true","*")

When the token has the value A, this becomes

| search AFRoute=if("A" == "A","true","*")

which I assumed would be equivalent to | search AFRoute="true". But when I run | search AFRoute=if("A" == "A","true","*") directly, it does not behave the same as | search AFRoute="true". What is the difference between | search AFRoute=if("A" == "A","true","*") and | search AFRoute="true"?

Kevin
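For context: the search command treats if(...) here as a literal string to match, not as a function to evaluate, while the where command does evaluate eval expressions. A sketch of one way to express the intended logic with the same token (true() simply passes every event through when the token is not "A"):

| where if($t_TargetType|s$ == "A", AFRoute == "true", true())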
Do we need to rebalance data after upsizing (adding) an indexer?
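If this is about an indexer cluster, a sketch of the usual starting point is the data rebalance command, run on the cluster manager:

splunk rebalance cluster-data -action start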
I work at a company in Brazil that is a Splunk Enterprise customer. I am trying to request a dev/test license to install in an environment that is already running as a test on the Splunk Enterprise 60-day trial license. Can I apply the dev/test license "on top" of this free license? And how do I request the dev/test license? The site is returning a 404 page-not-found error: (http://splunk.com/dev-test?_ga=2.8916146.684201008.1645269262-1587620948.1640209718) https://www.splunk.com/dev-test Thank you
I am unable to open Splunk Enterprise; I am getting the error "This site can't be reached. 127.0.0.1 refused to connect. ERR_CONNECTION_REFUSED".

I have checked the proxy and firewall. It worked after I repaired the application, but then the error came back. I think it stopped working after a Windows update, since Windows updated automatically after the installation and repair.

Any suggestions on resolving the issue?
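One quick check worth trying (a sketch, assuming a default Windows install path) is whether the splunkd service is actually running, and to start it if it is not:

"C:\Program Files\Splunk\bin\splunk.exe" status
"C:\Program Files\Splunk\bin\splunk.exe" start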
Hello splunkies! I'm trying to be an admin and I'm doing an exercise, but I cannot find the way to configure my inputs.conf. Here is the exercise:

Path: /logfiles/syslog/training-nix01.txt
This file will be updated continuously and will roll daily to training-nix01.1, training-nix01.2, etc.
Data from these files should be written to:
Index: Training
Sourcetype: tp:tr

Any ideas?
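A minimal inputs.conf sketch for this exercise (it assumes the index already exists, and uses lowercase "training" because index names cannot contain uppercase letters):

[monitor:///logfiles/syslog/training-nix01.txt]
disabled = 0
index = training
sourcetype = tp:tr

Monitoring only the live file is usually sufficient, since the rolled copies contain data that has already been read.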
Even after enabling move_policy = sinkhole, why is the data still there? I have verified that the path included in the monitor stanza is not included in the batch stanza.
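For reference, a minimal batch-input sketch where move_policy = sinkhole deletes each file after it is indexed (the path and index here are hypothetical); sinkhole applies only to [batch://...] stanzas, not to [monitor://...] stanzas:

[batch:///data/drop_folder]
move_policy = sinkhole
index = main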
Hi Splunkers, I used the following code to change my dropdown input width:

.input-dropdown {
  min-width: 120px !important;
  width: 120px !important;
  max-width: 120px !important;
}
.splunk-dropdown .select2-container {
  min-width: 120px !important;
  width: 120px !important;
  max-width: 120px !important;
}

When I changed the width, the width of the whole dropdown area decreased, but the width of the dropdown field itself did not change, which caused the dropdown to overlap the next dropdown. I tried different combinations of these widths, the HTML text/css below, margin-bottom, etc., but whatever I tried, only the width of the whole dropdown area changed; it never worked for the dropdown box width. I also tried the following CSS, but it has the same issue:

<html>
  <style type="text/css">
    #input_unit {
      width: 440px;
    }
  </style>
</html>

Thanks in advance.

Kevin
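One thing that may be worth trying, purely as a sketch: if the dropdown is rendered by select2 (as the .select2-container rule above suggests), the visible box is a child element rather than the container, so it may need its own width rule. The selector names below are assumptions about the generated markup and can vary between Splunk versions:

/* sketch: constrain the inner select2 elements, not just the container */
#input_unit .select2-container,
#input_unit .select2-choice,
#input_unit .select2-chosen {
  width: 120px !important;
  max-width: 120px !important;
}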
I am working on a school project to gather temperature data from a room with a Raspberry Pi. The data comes from a BME280 sensor and is relayed through Python, which outputs the temperature. I want to forward this data to Splunk and display it in real time using Splunk AR. Does anyone know how I could get the data from my Raspberry Pi to my Splunk Enterprise instance?
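One common route is the HTTP Event Collector (HEC): enable a HEC input on the Splunk Enterprise instance and have the Python script on the Pi POST its readings as JSON to https://<splunk-host>:8088/services/collector/event with an "Authorization: Splunk <token>" header. A minimal sketch of the receiving side in inputs.conf (the input name, index, and sourcetype are assumptions; the token is created along with the input):

[http://pi_temperature]
disabled = 0
index = iot
sourcetype = pi:bme280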
Hi, I have a clustered multi-site indexing architecture with a search head cluster. I am getting the Fortinet logs as follows:

Fortinet ==> syslog ==> HF monitors the log files ==> indexers (indexer discovery)

I installed the Fortinet add-on on all indexers and search heads, but I still see the logs coming in under the sourcetype I defined in inputs.conf for the monitor input. Below is the list of apps I pushed to the peers and SHs: @fortinet @fortinet1
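For context, an add-on's index-time sourcetype rewriting happens on the first full Splunk instance that parses the data, which in this layout is the HF, so the add-on generally needs to be installed there as well. A sketch of the monitor input on the HF (the path is hypothetical, and the sourcetype placeholder stands for whichever sourcetype the installed Fortinet add-on's props.conf keys on):

[monitor:///var/log/fortinet/fortigate.log]
disabled = 0
index = fortinet
sourcetype = <sourcetype the add-on expects>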
I had the following scenario working in one clustered environment, using physical servers:

1. Route data to an index based on a value found in the raw data. This is achieved with props and transforms deployed within a parsing app, which looks something like this:

props.conf:
[a_sourcetype]
TRANSFORMS-index_routing = a_index_routing

[b_sourcetype]
TRANSFORMS-index_routing = b_index_routing

transforms.conf:
[index_routing]
SOURCE_KEY = _raw
REGEX = ^\d{4}\-\d{2}-\d{2}T\d{2}\:\d{2}\:\d{2}\.\d+\+\d{2}\:\d{2}\s\w+\.\w+\.bb\-(?<field1>\w+?)\-
DEST_KEY = _MetaData:Index
FORMAT = index_name_$1

Note: field1 is where the value a or b will appear.

There is also an inputs.conf on the deployment server that pushes the config with the correct index and sourcetype to the forwarder. This used to work without any issues, and in fact it still does in one of the clustered environments. But it does not work in the new test clustered environment: the data gets sent to the main index instead of the indexes specified in props and transforms. Is there a setting on the indexer or elsewhere that could stop this from working?
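One way to see which configuration the new environment's indexers (or a heavy forwarder, if one sits in front of them) actually resolve is btool; a sketch, with the stanza names adjusted to the real ones:

splunk btool props list a_sourcetype --debug
splunk btool transforms list index_routing --debug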
Hey, I am dealing with data from an app, and I am trying to figure out which times of day our app is most popular, by hour. I'm not sure how to get an average of which hours are popular for users starting the app. If anyone could help, it would be greatly appreciated! Here's the query I have been using to see users starting sessions:

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION

Thanks!
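A sketch of one way to get the average number of session starts per hour of day, building on the search above (field names as given):

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION
| bin _time span=1h
| stats count as starts by _time
| eval hour=strftime(_time, "%H")
| stats avg(starts) as avg_starts by hour
| sort hour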
My data is something like this:

stackTrace: [
  { inProject: false, file: "/path/to/file.c" },
  { inProject: true, file: "/path/to/file.c" },
  { inProject: false, file: "/path/to/file.c" }
]

I'd like to get the list of events where the first element that has inProject=true contains "file.c" in file.
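A sketch of one approach, assuming the events are valid JSON so spath can expand the array (the regexes would need adjusting to the exact raw format):

... | spath path=stackTrace{} output=frames
| eval first_in_project=mvindex(mvfilter(match(frames, "\"inProject\"\s*:\s*true")), 0)
| where match(first_in_project, "file\.c")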
I have a directory that is being monitored on a Splunk heavy forwarder: /app_monitoring

This directory will receive a file every day called Report.csv. There may be duplicate data in it that is already indexed. How can I prevent duplicate indexing in this case? Do I have to change anything in the inputs.conf in the app folder? Please advise.
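For what it's worth, a monitor input tracks file read positions rather than row contents, so by itself it will not skip individual rows that were already indexed from an earlier file. One workaround, sketched here with an assumed index name, is to de-duplicate at search time:

index=app_monitoring source="/app_monitoring/Report.csv" | dedup _raw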
Hello, I'm trying to combine different events (with different fields) into one event based on a common field value. Is there an easy way to do this? For example:

(index=data sourcetype=source1) OR (index=customer sourcetype=sourcetype2)

Event from source 1:
customer#: 12345
billingpackage: fastspeed
speed: 50m

Event from source 2:
customer#: 12345
address: 1st street north
zip: 41783

Desired event:
customer#: 12345
billingpackage: fastspeed
speed: 50m
address: 1st street north
zip: 41783

Thanks in advance for the help!
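A sketch of one common way to do this with stats, assuming the fields shown above are already extracted:

(index=data sourcetype=source1) OR (index=customer sourcetype=sourcetype2)
| stats values(billingpackage) as billingpackage values(speed) as speed values(address) as address values(zip) as zip by "customer#"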
Hi Splunkers, I'm trying to build my first dashboard and I've hit a wall. I can't find any mention of this elsewhere; can anyone help?

I'm trying to make a multiselect input with all elements from a search, and dynamically select 10 of them (based on a field in the search).

I get a list of all the elements in the list from:
index=* | fields spID | dedup spID

I can get the ones I want selected using:
index=* | stats count(spID) as auths by spID | sort -auths limit=10
(this then spills over into a chart)

The code I have so far is:

<input type="multiselect" token="spPicker" searchWhenChanged="true">
  <label>spPicker</label>
  <fieldForLabel>spID</fieldForLabel>
  <fieldForValue>spID</fieldForValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <search>
    <query>index=* | fields spID | dedup spID</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <delimiter>,</delimiter>
</input>

So this half works: all the elements are present in the list. But I don't see a way of auto-selecting the top 10. I've tried <defaults> and <initialValues>, but these both want a static list. Any ideas?

Thanks in advance,

Jim
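One pattern that may be worth trying, purely as a sketch (whether a multiselect accepts a programmatically set, delimited form token can vary by Splunk version): a separate dashboard-level search that computes the top 10 values and sets the input's form token when it finishes.

<search>
  <query>index=* | stats count(spID) as auths by spID | sort -auths | head 10
    | stats values(spID) as top10 | eval top10=mvjoin(top10, ",")</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <done>
    <set token="form.spPicker">$result.top10$</set>
  </done>
</search>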
Hello, we have some appliances whose data/logs need to be sent and received via syslog. I have a server to receive those logs, and I know we need to use a TCP/UDP port. How would I proceed? What else do I need, and do those logs need to be in any specific format? Any help/recommendations would be highly appreciated. Thank you so much!
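If the receiving server is (or forwards to) a Splunk instance, a minimal network-input sketch in inputs.conf looks like this (the port, index, and sourcetype are assumptions; many deployments instead write syslog to files with a dedicated syslog daemon and have Splunk monitor those files):

[udp://514]
sourcetype = syslog
index = network
connection_host = ip

[tcp://514]
sourcetype = syslog
index = network
connection_host = ip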
Hello Splunk community. As of today we have two queries that are running.

Count of API calls grouped by apiName and status:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL

which displays a table something like this:

apiName | success | error | NULL
--------+---------+-------+-----
Test1   | 10      | 20    | 0
Test2   | 10      | 20    | 0
Test3   | 10      | 20    | 0
Test4   | 10      | 20    | 0
Test5   | 10      | 20    | 0
Test6   | 10      | 20    | 0

Latency grouped by apiName:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| chart min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName
| table apiName RT_fastest RT_slowest

which displays a table something like this:

apiName | RT_fastest | RT_slowest
--------+------------+-----------
Test1   | 10         | 20
Test2   | 10         | 20
Test3   | 10         | 20
Test4   | 10         | 20
Test5   | 10         | 20
Test6   | 10         | 20

Question: both tables are grouped by apiName. Is there a way to combine these queries so that I get a single result, something like this?

apiName | success | error | NULL | RT_fastest | RT_slowest
--------+---------+-------+------+------------+-----------
Test1   | 10      | 20    | 0    | 10         | 20
Test2   | 10      | 20    | 0    | 10         | 20
Test3   | 10      | 20    | 0    | 10         | 20
Test4   | 10      | 20    | 0    | 10         | 20
Test5   | 10      | 20    | 0    | 10         | 20

I could not find any documentation about combining multiple chart queries into one. Could someone please help me with this? Thanks.
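A sketch of one way to compute both sets of columns in a single pass, assuming the status field's values are literally "success" and "error" and the source fields are the same as in the two searches above:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName, api.metaData.status as status, api.measures.tt as Response_Time
| eval success=if(status=="success",1,0), error=if(status=="error",1,0), no_status=if(isnull(status),1,0)
| stats sum(success) as success sum(error) as error sum(no_status) as "NULL" min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName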