
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Even after enabling move_policy=sinkhole, why is the data still there? I have verified that the path included in the monitor stanza is not included in the batch stanza.
Hi Splunkers,

I used the following CSS to change my dropdown input width:

.input-dropdown { min-width: 120px !important; width: 120px !important; max-width: 120px !important; }
.splunk-dropdown .select2-container { min-width: 120px !important; width: 120px !important; max-width: 120px !important; }

When I changed the width, the width of the dropdown area decreased, but the width of the dropdown field itself did not change, which caused the dropdown to overlap with the next dropdown. I tried different combinations of these widths, the HTML text/css approach below, margin-bottom, and so on, but whatever I tried, only the width of the whole dropdown area changed; the dropdown box width never changed. I also tried the following CSS, but I have the same issue:

<html>
<style type="text/css">
#input_unit { width: 440px; }
</style>
</html>

Thanks in advance.

Kevin
I am working on a school project to gather temperature data from a room through a Raspberry Pi. The data comes from a BME280 sensor and is read by a Python script that outputs the temperature. I want to forward this data to Splunk and display it in real time using Splunk AR. Does anyone know how I could get the data from my Raspberry Pi into my Splunk Enterprise instance?
Hi, I have a clustered multi-site indexing architecture with a search head cluster. I am getting the Fortinet logs as follows:

Fortinet ==> syslog ==> HF monitors the log files ==> Indexers (indexer discovery)

I installed the Fortinet add-on on all indexers and search heads, but I still see the logs coming in under the sourcetype I defined in inputs.conf for the monitor input. Below is the list of apps I pushed to the peers and SHs:

@fortinet
@fortinet1
I had the following scenario working in one clustered environment, using physical servers:

1. Route data to an index based on a value found in the raw data. This is achieved with props and transforms deployed in a parsing app, which look something like this:

props.conf:
[a_sourcetype]
TRANSFORMS-index_routing = a_index_routing

[b_sourcetype]
TRANSFORMS-index_routing = b_index_routing

transforms.conf:
[index_routing]
SOURCE_KEY = _raw
REGEX = ^\d{4}\-\d{2}-\d{2}T\d{2}\:\d{2}\:\d{2}\.\d+\+\d{2}\:\d{2}\s\w+\.\w+\.bb\-(?<field1>\w+?)\-
DEST_KEY = _MetaData:Index
FORMAT = index_name_$1

Note: field1 is where the value a or b will appear.

There is also an inputs.conf on the deployment server that pushes the config with the correct index and sourcetype to the forwarder. This used to work without any issues, and it still does in one of the clustered environments. But it does not work in the new test clustered environment: the data gets sent to the main index instead of the indexes specified in props and transforms. Is there a setting on the indexer or elsewhere that could stop this from working?
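As a quick sanity check on where these events are actually landing in the problem environment, a tstats search over a short recent window can help; the sourcetype names below are taken from the props.conf above, so substitute the real ones if they differ:

| tstats count WHERE index=* AND (sourcetype=a_sourcetype OR sourcetype=b_sourcetype) BY index sourcetype

If everything shows up in index=main with the expected sourcetype, that would point at the routing transform simply not being applied on the first full Splunk instance that parses the data.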
Hey, I am dealing with data from an app, and I am trying to figure out which times of day our app is most popular, by hour. I am not sure how to get an average of which hours are most popular for users starting the app. If anyone could help, it would be greatly appreciated! Here is the query I have been using to see users starting sessions:

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION

Thanks!
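One possible way to get at this, building on the search above and assuming each matching event represents one session start, is to count starts per hour and then average those counts by hour of day (a sketch, not tested against this data):

index=app1 AND service=app AND logLevel=INFO AND environment=prod "message.eventAction"=START_SESSION
| bin _time span=1h
| stats count AS session_starts BY _time
| eval hour_of_day=strftime(_time, "%H")
| stats avg(session_starts) AS avg_session_starts BY hour_of_day
| sort + hour_of_day

The first stats counts session starts in each one-hour bucket, and the second averages those buckets across days for each hour of the day.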
My data is something like this:

stackTrace: [
  { inProject: false, file: "/path/to/file.c" },
  { inProject: true, file: "/path/to/file.c" },
  { inProject: false, file: "/path/to/file.c" }
]

I'd like to get the list of events where the first element that has inProject=true contains "file.c" in file.
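A sketch of one approach, assuming the stackTrace array is JSON that spath can pull apart (the index and sourcetype below are placeholders for the real base search):

index=my_index sourcetype=my_json ```my_index and my_json are placeholders```
| spath path=stackTrace{}.inProject output=inProject
| spath path=stackTrace{}.file output=file
| eval pairs=mvzip(inProject, file, "|")
| eval in_project_pairs=mvfilter(match(pairs, "^true"))
| eval first_in_project_file=mvindex(split(mvindex(in_project_pairs, 0), "|"), 1)
| where like(first_in_project_file, "%file.c%")

mvzip keeps the inProject and file values aligned, so mvindex(..., 0) picks out the file of the first element where inProject is true.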
I have a directory that is being monitored on a Splunk heavy forwarder: /app_monitoring

This directory receives a file every day called Report.csv, and it may contain duplicate data that has already been indexed. How do I prevent duplicate indexing in this case? Do I have to change anything in the inputs.conf in the app folder? Please advise.
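This usually needs to be handled on the input side, but as a search-time workaround, duplicates that have already been indexed can at least be suppressed in results with something like the following, assuming the repeated rows are byte-for-byte identical (the index name is a placeholder):

index=my_index source="/app_monitoring/Report.csv" ```my_index is a placeholder```
| dedup _raw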
Hello, I'm trying to combine different events (with different fields) into one event based on a common field value. Is there an easy way to do this? For example:

(index=data sourcetype=source1) OR (index=customer sourcetype=sourcetype2)

Event from source 1:
customer#: 12345
billingpackage: fastspeed
speed: 50m

Event from source 2:
customer#: 12345
address: 1st street north
zip: 41783

Desired event:
customer#: 12345
billingpackage: fastspeed
speed: 50m
address: 1st street north
zip: 41783

Thanks in advance for the help!
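A common pattern for this kind of merge, assuming the shared field really is named customer# in both sourcetypes, is to let stats group the two events together (a sketch):

(index=data sourcetype=source1) OR (index=customer sourcetype=sourcetype2)
| stats values(billingpackage) AS billingpackage values(speed) AS speed values(address) AS address values(zip) AS zip BY customer#

values() keeps whichever source supplied each field, so the row keyed on customer# 12345 ends up with all five fields.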
Hi Splunkers, I'm trying to build my first dashboard and I've hit a wall. I can't find any mention of this elsewhere; can anyone help?

I'm trying to make a multiselect input with all elements from a search, and dynamically select 10 of them (based on a field in the search).

I get a list of all the elements in the list from:
index=* | fields spID | dedup spID

I can get the ones I want selected using:
index=* | stats count(spID) as auths by spID | sort -auths limit=10
(this then spills over into a chart)

The code I have so far is:

<input type="multiselect" token="spPicker" searchWhenChanged="true">
  <label>spPicker</label>
  <fieldForLabel>spID</fieldForLabel>
  <fieldForValue>spID</fieldForValue>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <search>
    <query>index=* | fields spID | dedup spID</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <delimiter>,</delimiter>
</input>

So this half works - all the elements are present in the list. I don't see a way of auto-selecting the top 10 - I've tried <defaults> and <initialValues>, but these both want a static list. Any ideas anyone?

Thanks in advance,

Jim
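For what it's worth, the SPL side of building that default selection (a single comma-joined value that the dashboard would still need to push into the token, for example from a search <done> handler) might look something like this sketch; the XML wiring itself is not shown here:

index=*
| stats count AS auths BY spID
| sort - auths
| head 10
| stats values(spID) AS default_spIDs
| eval default_spIDs=mvjoin(default_spIDs, ",")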
Hello, we have some appliances whose data/logs we need to send and receive over syslog. I have a server to receive those logs, and I know we need to use a TCP/UDP port. How would I proceed? What else do I need, and do the logs need to be in any specific format? Any help or recommendations will be highly appreciated. Thank you so much!
Hello Splunk community. As of today we have two queries running.

Count of API calls grouped by apiName and status:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL

which displays a table something like this:

apiName | success | error | NULL
Test1   | 10      | 20    | 0
Test2   | 10      | 20    | 0
Test3   | 10      | 20    | 0
Test4   | 10      | 20    | 0
Test5   | 10      | 20    | 0
Test6   | 10      | 20    | 0

Latency of each API grouped by apiName:

index=aws* api.metaData.pid="myAppName"
| rename api.p as apiName
| rename api.measures.tt as Response_Time
| chart min(Response_Time) as RT_fastest max(Response_Time) as RT_slowest by apiName
| table apiName RT_fastest RT_slowest

which displays a table something like this:

apiName | RT_fastest | RT_slowest
Test1   | 10         | 20
Test2   | 10         | 20
Test3   | 10         | 20
Test4   | 10         | 20
Test5   | 10         | 20
Test6   | 10         | 20

Question: both tables are grouped by apiName. Is there a way to combine these queries so that I get a single result, something like this?

apiName | success | error | NULL | RT_fastest | RT_slowest
Test1   | 10      | 20    | 20   | 20         | 20
Test2   | 10      | 20    | 20   | 20         | 20
Test3   | 10      | 20    | 20   | 20         | 20
Test4   | 10      | 20    | 20   | 20         | 20
Test5   | 10      | 20    | 20   | 20         | 20

I could not find any documentation about combining multiple chart queries into one. Could someone please help me with this? Thanks
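One way to get both sets of columns in a single result, assuming the field names used in the two searches above, is to replace the two chart commands with a single stats command that uses eval-based counts (a sketch, not tested against this data):

index=aws* api.metaData.pid="myAppName"
| rename api.p AS apiName, api.metaData.status AS status, api.measures.tt AS Response_Time
| stats count(eval(if(status="success",1,null()))) AS success
        count(eval(if(status="error",1,null()))) AS error
        count(eval(if(isnull(status),1,null()))) AS NULL
        min(Response_Time) AS RT_fastest
        max(Response_Time) AS RT_slowest
        BY apiName
| table apiName success error NULL RT_fastest RT_slowest

Because everything is grouped by apiName in one stats, the count columns and the RT columns land on the same row without any join or appendcols.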
Hello, for a project I'm working on I need to print (somehow) the outcome of | collect, in order to see whether the command was successful or not. The dashboard basically manipulates some data, and the updated version of the event is then collected via an HTML button and JavaScript. It would be useful for the user to see the result of that action, so they can tell when (and if) the command completed successfully. Do you think this is feasible? Could the $job.messages$ token be used for this somehow? Thanks in advance for your kind support.
I have an event with multiple levels of nested objects and lists that I need to break down into individual events. For example, a single event can look like the nested JSON in the attached screenshot (not included here).

And I need to convert that event into a table like this:

Group_name | Sub_group | Subsubgroup | Some other info …
alpha      | alpha1    | beta        |
alpha      | alpha1    | gamma       |
alpha      | alpha2    | a           |
alpha      | alpha2    | b           |
alpha      | alpha3    | uno         |

I've tried multiple combinations of mvexpand, table, and stats, but I keep getting erroneous results. The flatten command doesn't seem to work, and I fear I might need some crazy regex to parse all the embedded objects and lists of objects. Not to mention this is only one event; in reality I would have multiple other groups with their corresponding subgroups.
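Since the actual JSON is not visible here, the following is only a generic sketch for a hypothetical structure groups[].subgroups[].items[], showing the usual spath-then-mvexpand pattern; every path and field name below is a placeholder that would need to be replaced with the real ones:

index=my_index sourcetype=my_json ```all names here are placeholders```
| spath path=groups{} output=group
| mvexpand group
| spath input=group path=name output=Group_name
| spath input=group path=subgroups{} output=subgroup
| mvexpand subgroup
| spath input=subgroup path=name output=Sub_group
| spath input=subgroup path=items{} output=Subsubgroup
| mvexpand Subsubgroup
| table Group_name Sub_group Subsubgroup

Each spath with input= re-parses the JSON fragment produced by the previous mvexpand, which is what lets the nesting be unrolled level by level.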
Does anyone have the Debian installer for version 7.2.x? I need to update an older 6.5 installation, and version 7 is no longer available through the official channels. Thanks in advance.
I need guidance on which interfaces on a particular Cisco router to monitor in Splunk. The goal is to monitor only the necessary interfaces, to cut down on alerts that are not meaningful. Please advise.
Hello Splunk community. I have a query that is currently running as shown below:

index=myIndex* api.metaData.pid="my_plugin_id"
| rename api.p as apiName
| chart count BY apiName "api.metaData.status"
| multikv forceheader=1
| table apiName success error NULL
| eval line=printf("%-85s% 10s% 10s% 7s",apiName, success, error, NULL)
| stats list(line) as line
| eval headers=printf("%-85s% 10s% 10s% 7s","API Name","Success","Error", "NULL")
| eval line=mvappend(headers,line)
| fields - headers

It displays a table with "API Name", "Success", "Error", and "NULL" counts, and it works as expected. Now I want to add a new column to the table that displays the latency values (tp95 and tp99) for each apiName. The time taken by each API is in the field api.metadata.tt. How can I achieve this? I am new to Splunk and I am literally stuck at this point. Could someone please help me? Thank you.

Info: just so you know, my query has this additional formatting logic because of a related question here.
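A sketch of how the status counts and latency percentiles could be produced in one pass, before the printf formatting above is re-applied, assuming the status field is api.metaData.status and the latency field is api.metadata.tt as described in the question (field names copied from the question, not verified):

index=myIndex* api.metaData.pid="my_plugin_id"
| rename api.p AS apiName, api.metaData.status AS status, api.metadata.tt AS tt
| stats count(eval(if(status="success",1,null()))) AS success
        count(eval(if(status="error",1,null()))) AS error
        count(eval(if(isnull(status),1,null()))) AS NULL
        perc95(tt) AS tp95
        perc99(tt) AS tp99
        BY apiName

The tp95 and tp99 columns could then be added to the printf format string in the same way as the existing count columns.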
Hi,

I have installed Splunk 8.1.8 on my Linux server. After logging in to the Splunk UI, I went to Apps and installed Splunk DB Connect by uploading the zip file splunk-db-connect_380.zip.

When I go to Splunk DB Connect and run the setup, the UI shows errors such as "can't communicate with task server, please check your settings" and "'str' object has no attribute 'decode'". After restarting Splunk and running btool, I see the message below:

Checking: /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf
Invalid key in stanza [server] in /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf, line 2: run_only_one (value: false).
Invalid key in stanza [dbxquery] in /opt/splunk/etc/apps/splunk_app_db_connect/default/inputs.conf, line 5: run_only_one (value: false).

In the logs, I see only one file related to DB Connect, splunk_app_db_connect_dbx.log; I cannot see any other files. In this log file I see the errors below:

2022-02-18T06:45:18-0600 [ERROR] [settings.py], line 89 : Throwing an exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 76, in handle_POST
    self.validate_java_home(payload["javaHome"])
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/rest/settings.py", line 215, in validate_java_home
    is_valid, reason = validateJRE(java_cmd)
  File "/opt/splunk/etc/apps/splunk_app_db_connect/bin/dbx2/jre_validator.py", line 73, in validateJRE
    output = output.decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'

Below is the snapshot of the Splunk DB Connect settings in the UI.

Please help me resolve these issues and set up Splunk DB Connect successfully.

Thank you.
We are using SAP Business Technology Platform (Cloud Foundry) as a PaaS, and our Java and Node.js applications are deployed on the Cloud Foundry platform. We want to drain application logs to Splunk Observability Cloud; please provide the implementation steps. Currently we use the Kibana service for log monitoring on SAP Business Technology Platform (Cloud Foundry). Now we want to drain syslog and application logs from SAP Business Technology Platform (Cloud Foundry) to Splunk Observability Cloud, and we need all the steps required to set up this integration. We want to use the Infrastructure Monitoring, Application Performance Monitoring, Application Log monitoring (Splunk Log Observer), Splunk Synthetic Monitoring, and Splunk Real User Monitoring features of Splunk Observability Cloud. We like the features of Splunk Observability Cloud, but we don't know how to set up the integration with Cloud Foundry platform applications. We are new to Splunk and want to do a simple PoC with this integration; it will help us decide whether to use Splunk Observability Cloud for monitoring across all our products. If integrating Splunk Observability Cloud with the Cloud Foundry platform is not possible, please suggest alternative ways to do the PoC. Also, please let us know if someone has already done an integration of Splunk Observability Cloud with the Cloud Foundry platform.
Hi, we are building a service availability dashboard based on the formula below. Could you please help me implement this in SPL?

The availability calculation for a service is as follows:

Availability (%) = (Total availability hours - [ (end time of first P1 - start time of first P1) + (end time of second P1 - start time of second P1) + ... ]) * 100 / Total availability hours
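A rough SPL sketch of that formula, assuming one event per P1 incident with epoch fields start_time and end_time (placeholder names) and a 30-day reporting window, might look like this:

index=my_incidents severity=P1 service=my_service ```index, severity and service values are placeholders```
| eval outage_hours=(end_time - start_time)/3600
| stats sum(outage_hours) AS total_outage_hours
| eval total_hours=30*24 ```total availability hours for a 30-day window```
| eval availability_pct=round((total_hours - total_outage_hours)*100/total_hours, 2)
| table availability_pct

The stats sums the duration of every P1 in the window, and the final eval applies the formula above to turn the outage total into an availability percentage.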