All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a data set of events with ID numbers (every time an event happens, an entry is made in the table, and each type of event has an ID), and from it I'm getting the count by ID number. I need to take the average of the daily counts from the last 30 days and compare it to the current 24-hour count. I have a feeling timewrap could be useful for this, but I'm not sure how. Any suggestions?
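One possible approach, as a rough sketch: compute the daily counts per ID over the last 30 days, average them, and join on the count from the last 24 hours. The index name my_events is a placeholder and the field is assumed to be called id; adjust both to match the data.

    index=my_events earliest=-30d@d latest=@d
    | bin _time span=1d
    | stats count AS daily_count BY _time, id
    | stats avg(daily_count) AS avg_daily_count BY id
    | join type=left id
        [ search index=my_events earliest=-24h
          | stats count AS last_24h_count BY id ]
    | eval pct_of_average = round(100 * last_24h_count / avg_daily_count, 1)

timewrap (applied after a timechart) can also give a visual day-over-day comparison, but the join version produces a single comparison table.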
I already have the following macro `subnet(3)` defined as:

    | eval subnet = case(cidrmatch("$ip1$/24",src_ip), "$output_name$", cidrmatch("$ip2$",src_ip), "$output_name$")

If I call the macro multiple times in the same search, the value of the field it creates (also called subnet) is overwritten by the latest call. I would like to edit the macro so that calling it multiple times appends a new value to subnet. How could I use mvappend, or another command, to accomplish this?
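One hedged sketch of the change, assuming the macro keeps its three arguments: wrap the case() in mvappend() so each call adds to any existing subnet value instead of replacing it (on the first call subnet does not exist yet, and mvappend simply ignores the null).

    | eval subnet = mvappend(subnet, case(cidrmatch("$ip1$/24",src_ip), "$output_name$", cidrmatch("$ip2$",src_ip), "$output_name$"))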
Hello, I am trying to create an Oracle database collector with the REST API, but I can't seem to get it working. The example online is of a SQL Server, so I was hoping someone could please provide an example of an Oracle server JSON payload. Below is what I get if I do a GET. Also, our controller version is 4.5, so I am not sure if that is part of the problem. The error I get is just a generic 500 Internal Server Error.

{
  "id": 42, "version": 0, "name": "Oracle-OLTP", "nameUnique": true, "builtIn": false,
  "createdBy": null, "createdOn": 1531336844000, "modifiedBy": null, "modifiedOn": 1531336844000,
  "type": "ORACLE", "hostname": "Oracle-OLTP.domain", "useWindowsAuth": false,
  "username": "USER", "password": "appdynamics_redacted_password", "port": 1521,
  "loggingEnabled": true, "enabled": true, "excludedSchemas": [ "" ], "jdbcConnectionProperties": [],
  "databaseName": null, "failoverPartner": null, "connectAsSysdba": false, "useServiceName": true,
  "sid": "SRVC", "customConnectionString": null, "enterpriseDB": false, "useSSL": false,
  "enableOSMonitor": false, "hostOS": null, "useLocalWMI": false, "hostDomain": null,
  "hostUsername": null, "hostPassword": "", "dbInstanceIdentifier": null, "region": null,
  "certificateAuth": false, "removeLiterals": true, "sshPort": 0, "agentName": "Default Database Agent",
  "dbCyberArkEnabled": false, "dbCyberArkApplication": null, "dbCyberArkSafe": null,
  "dbCyberArkFolder": null, "dbCyberArkObject": null, "hwCyberArkEnabled": false,
  "hwCyberArkApplication": null, "hwCyberArkSafe": null, "hwCyberArkFolder": null,
  "hwCyberArkObject": null, "orapkiSslEnabled": false, "orasslClientAuthEnabled": false,
  "orasslTruststoreLoc": null, "orasslTruststoreType": null, "orasslTruststorePassword": "",
  "orasslKeystoreLoc": null, "orasslKeystoreType": null, "orasslKeystorePassword": "",
  "ldapEnabled": false, "customMetrics": null, "subConfigs": [], "jmxPort": 0
}
Hi, I have a database table and an anomaly table. Both tables have a database_id field. I am interested in the status and confidence fields in the anomaly table, as well as the data_source and ip fields in the database table, and I want to combine them into one table based on database_id. I tried queries like the one below, but the result was not as expected.

    index=anomalies | join type=left database_id [search index=assets] | fields anomaly_id, confidence, current_status, database_id, source_type, ip

How could I write a query that returns a table showing the info for all anomalies as well as the database info related to each anomaly, using database_id as a bridge? Thank you in advance! Regards,
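As a hedged alternative sketch that avoids join (field names are taken from the question and may need adjusting): search both indexes at once, copy the asset fields onto the anomaly events with eventstats, then keep only the anomaly rows.

    index=anomalies OR index=assets
    | eventstats values(data_source) AS data_source, values(ip) AS ip BY database_id
    | search index=anomalies
    | table anomaly_id, confidence, current_status, database_id, data_source, ip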
Hey Splunkers! We have multiple IDX/SH clusters that are peered for regulatory, compliance, and operational reasons. We have a specific SHC that we would like to de-peer from an older IDX cluster. Indexes are reused and migrated across different IDX clusters frequently. What is the fastest and most accurate way to see what data a SHC is fetching from the IDX clusters? Thanks in advance!
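A quick, hedged way to check this from the SHC itself: every event returned to a search carries a splunk_server field naming the indexer that served it, so a tstats breakdown by index and indexer run on the SHC shows which peers are actually supplying data (the 7-day window is just an example).

    | tstats count where index=* earliest=-7d by index, splunk_server
    | sort - count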
How would I go about forming a query to search within a specific directory? Suppose I want to search for files
Hi, how can I use a lookup without showing its output in place? For example, when the mouse moves over 404, just show a tooltip that says "page not found". https://docs.splunk.com/Documentation/Splunk/8.2.1/Knowledge/ConfigureCSVlookups   Any ideas? Thanks
Hi, I'm trying to get the total duration of events for each user from access logs that contain time gaps. Sample events (_time, user):

2021-06-30 00:00:26  user1
2021-06-30 01:00:26  user1
2021-06-30 01:00:26  user1
2021-06-30 01:20:26  user1

and then there are no events for 4 hours

2021-06-30 05:00:26  user1
2021-06-30 05:30:26  user1
2021-06-30 06:02:26  user1

I'm trying to calculate the total duration for the day. Any ideas how to achieve this?
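One possible sketch, assuming a hypothetical index name and that idle gaps longer than a threshold (30 minutes here) should not count toward the total: take the time between consecutive events per user with streamstats, zero out the large gaps, and sum per day.

    index=access_logs
    | sort 0 user, _time
    | streamstats current=f last(_time) AS prev_time BY user
    | eval gap = _time - prev_time
    | eval active_time = if(isnotnull(gap) AND gap <= 1800, gap, 0)
    | bin _time span=1d
    | stats sum(active_time) AS total_duration_seconds BY _time, user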
500 and 504 are shown here, but I'd like to condense them into one column "5xx" (same with the 400s, where all 4xx responses would be shown under "4xx").

<panel>
  <table>
    <title>Functions Statistics by ResponseCode</title>
    <search base="base_search3">
      <query>stats sum(count) as Count sum(S) as Success sum(F) as Failures avg(Avg_ResponseTime) as Average_ResponseTime by _time FNAME CODE | eval Availability=(Success/(Success+Failures))*100 | chart count by FNAME CODE</query>
    </search>
    <option name="count">15</option>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </table>
</panel>

The above is the relevant code.
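One hedged sketch for the condensing itself, assuming CODE holds the numeric HTTP status produced by the base search: derive the class from the first digit and chart by that instead of by CODE (this would replace the final chart command in the query above).

    | eval CLASS = substr(tostring(CODE), 1, 1) . "xx"
    | chart sum(Count) AS Count by FNAME CLASS

sum(Count) assumes the per-code counts from the earlier stats should be added together; swap in whichever aggregation the panel is really meant to show.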
After upgrading to 8.2.0 we needed to upgrade eStreamer to a version that supports 8.2; the old 3.8.x version no longer worked. I've been struggling with 4.6.0 for weeks now. I can't get it to work at all. When I go to the overview page it's blank, and when I force my way to the setup URL I just get an "i am legend" message with no ability to configure inputs like I used to.

Launch App button, totally blank page (http://splunk/en-US/app/TA-eStreamer/info_overview)
No setup button anymore, but forced via old URL (http://splunk/en-US/manager/TA-eStreamer/apps/local/TA-eStreamer/setup?action=edit)

splencore.sh test works just fine:

-bash-4.2$ /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh test
2021-06-30T14:10:31.395618 Diagnostics INFO Checking that configFilepath (estreamer.conf) exists
2021-06-30 14:10:31,414 Diagnostics INFO Check certificate
2021-06-30 14:10:31,414 Diagnostics INFO Creating connection
2021-06-30 14:10:31,415 Connection INFO Connecting to 1.2.3.4:8302
2021-06-30 14:10:31,415 Connection INFO Using TLS v1.2
2021-06-30 14:10:31,569 Diagnostics INFO Creating request message
2021-06-30 14:10:31,570 Diagnostics INFO Request message=b'0001000200000008ffffffff48900061'
2021-06-30 14:10:31,570 Diagnostics INFO Sending request message
2021-06-30 14:10:31,570 Diagnostics INFO Receiving response message
2021-06-30 14:10:31,581 Diagnostics INFO Response message=b'gAN9cQAoWAcAAAB2ZXJzaW9ucQFLAVgLAAAAbWVzc2FnZVR5cGVxAk0DCFgGAAAAbGVuZ3RocQNLMFgEAAAAZGF0YXEEQzAAABOBBBBBBBBBBBBTiABBBBBBBBBBBBGgsAAAAIAAAAAAAAAABxBXUu'
2021-06-30 14:10:31,581 Diagnostics INFO Streaming info response
2021-06-30 14:10:31,581 Diagnostics INFO Connection successful

I followed this guide; I'm on the last step, where I need to check "is enabled", but I cannot since the setup page won't load. https://www.cisco.com/c/en/us/td/docs/security/firepower/670/api/eStreamer_enCore/eStreamereNcoreSplunkOperationsGuide_409.html
We are in the process of migrating a lot of hosts to report to a new deployment server. The deploymentclient.conf file was changed to reflect the IP address of the new deployment server, the hosts phone home to the new deployment server, and we verify that logs are coming in. Then, after some time, something modifies the deploymentclient.conf file to have the hosts report back to the old deployment server. We cannot seem to figure out what is making this change. We uninstalled and reinstalled the universal forwarder on a test client this past week and everything was fine. Then the same thing happened yesterday and the host is reporting back to the old deployment server. This is happening on some hosts, not all. The ones that do not have this problem are reporting to the same new deployment server with no issues. Any suggestions would be helpful.
Hi all, I'm working on a dashboard query that preprocesses data for a | geostats command. The end goal is to pipe data between two different but similar applications. The value used to populate the field 'country' has a different format in each application, and I want to know if it's possible to create a conditional statement that uses different values from a lookup table (or even separate lookups) inline.

Caveat: I know it's possible to build a different search and use a token with the | savedsearch command to pass the correct search based on the token. I'm trying to understand if it can be done inline in my base search.

The details: application A's field associated with country uses the ISO numeric country code. The search then uses a lookup table like so:

    | lookup [lookup].csv NumericCountryCode as [field in search] output AlphaCountryCode

This takes the numeric code and produces a 2-character ISO code like 'US' or 'BR' and so on. The format of the lookup table is: AlphaCountryCode, CountryCode, NumericCountryCode.

In application B, country data comes in uppercase ISO alphabetic format, but it can be represented as either the alpha-2 or alpha-3 (3-letter) country code. In the majority of cases, this could be solved with a substring call that reduces the length to 2. However, there are fringe cases where this approach does not work and must be accounted for. Example: Angola has alpha-2 code AO and alpha-3 code AGO.

My proposed approach is to modify the lookup table like so: AlphaCountryCode, AlphaTwoCountryCode, AlphaThreeCountryCode, NumericCountryCode, such that I can access both of these values and return the correct format (AlphaTwoCountryCode).

Is it possible to handle this inline with the same lookup table? The pseudo code would be something like this: if(len(countryCode)==2), lower(countryCode) and return the countryCode. Else if(len(countryCode)==3), | lookup [lookup].csv AlphaThreeCountryCode as countryCode output AlphaTwoCountryCode

Is this an appropriate approach, and if so, how can I build the syntax of the eval statement to evaluate length and output the appropriate result? Thanks
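A hedged inline sketch under these assumptions: the incoming field is called countryCode (hypothetical) and the lookup file, here called geo_countries.csv, has been extended with AlphaTwoCountryCode and AlphaThreeCountryCode columns. Since a lookup command cannot be made conditional in SPL, run the alpha-3 lookup unconditionally into a scratch field and let an eval pick the result based on length.

    | lookup geo_countries.csv AlphaThreeCountryCode AS countryCode OUTPUT AlphaTwoCountryCode AS alpha2_from_alpha3
    | eval country = case(len(countryCode)==2, countryCode, len(countryCode)==3, alpha2_from_alpha3, true(), null())
    | fields - alpha2_from_alpha3

For two-character values the alpha-3 lookup simply finds no match, and the case() falls back to the original value.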
    <query>"$ps_fn$" |rex field=message "(?&lt;Http&gt;HttpStatus): (?&lt;status&gt;\\d+)" | eval status=(status, "4%"),"4xx" | stats count by status</query> <earliest>$time.earliest$</ea... See more...
    <query>"$ps_fn$" |rex field=message "(?&lt;Http&gt;HttpStatus): (?&lt;status&gt;\\d+)" | eval status=(status, "4%"),"4xx" | stats count by status</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest>     I am trying to make a pie chart that shows all the 4xx errors, and then breaks them out by error - so x% was 401, y% was 402, etc. But i am getting Error in 'eval' command: The expression is malformed. Expected ). when i run this on the dashboard.
My CPU usage has increased, disk storage is under stress, and splunkd has been very busy in the last few days. Do the Monitoring Console, Splunk Admin apps, or even Meta Woot! help to find the hogs eating resources in my environment?
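The Monitoring Console covers most of this out of the box. As a quick hedged sketch, the introspection data it relies on can also be queried directly (assuming the _introspection index is being collected) to rank processes by CPU:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    | stats avg(data.pct_cpu) AS avg_cpu_pct, avg(data.mem_used) AS avg_mem_mb BY host, data.process
    | sort - avg_cpu_pct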
Looking to see if anyone is aware of a new app to take over for the Rundeck App Community Version as it is not compatible with 8.x/Python 3. It hasn't been updated since 2018.  https://splunkbase.splunk.com/app/4120/#/overview   Thanks
So far I think I have the syntax built out like this:

    index=tool OR index=tool2 OR index=tool3
    | eval parta=(index=tool information, information | stats count)
    | eval partb=(index=tool2 information, information | stats count)
    | eval partc=(index=tool3 information, | stats count)
    | table parta partb partc

Thinking this will get me totals for the separate tools, but I'm looking to get just one total, per week if possible. I was thinking addtotals would help, but I'm not sure. Any and all help would be very appreciated.
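eval cannot embed searches like that, so as a hedged sketch (assuming the three indexes are the only filter needed): a single timechart gives per-index counts per week, and addtotals adds the combined total on each row.

    index=tool OR index=tool2 OR index=tool3
    | timechart span=1w count by index
    | addtotals fieldname=total

If only the single weekly total matters, dropping the by index clause (| timechart span=1w count) returns just one number per week.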
DEBUG [2021-06-30 09:23:50,172] org.apache.tomcat.jdbc.pool.ClassLoaderUtil: Attempting to load class[com.mysql.cj.jdbc.Driver] from sun.misc.Launcher$AppClassLoader@764c12b6
TRACE [2021-06-30 09:23:50,187] org.skife.jdbi.v2.DBI: Handle [org.skife.jdbi.v2.BasicHandle@3dc82e6a] obtained in 333 millis
TRACE [2021-06-30 09:23:50,250] org.skife.jdbi.v2.DBI: statement:[/* ConfigurationStoreDao.get */ select property_key, property_value, property_schema, entity_id, entity_type, last_modified_at from configuration_store where entity_id = ? and entity_type = ? and property_key = ?] took 3 millis
DEBUG [2021-06-30 09:23:50,250] javax.management.mbeanserver: ObjectName = metrics:name=com.appdynamics.platformadmin.db.ConfigurationStoreDao.get
DEBUG [2021-06-30 09:23:50,250] javax.management.mbeanserver: name = metrics:name=com.appdynamics.platformadmin.db.ConfigurationStoreDao.get
DEBUG [2021-06-30 09:23:50,250] javax.management.mbeanserver: Send create notification of object metrics:name=com.appdynamics.platformadmin.db.ConfigurationStoreDao.get
DEBUG [2021-06-30 09:23:50,250] javax.management.mbeanserver: JMX.mbean.registered metrics:name=com.appdynamics.platformadmin.db.ConfigurationStoreDao.get
TRACE [2021-06-30 09:23:50,250] org.skife.jdbi.v2.DBI: Handle [org.skife.jdbi.v2.BasicHandle@3dc82e6a] released
ERROR [2021-06-30 09:23:50,250] com.appdynamics.platformadmin.core.service.EncryptionServiceSCSImpl: SCS initialization failed
! java.lang.NullPointerException: null
! at com.appdynamics.platformadmin.db.mappers.ConfigurationStoreMapper.map(ConfigurationStoreMapper.java:30)
! at com.appdynamics.platformadmin.db.mappers.ConfigurationStoreMapper.map(ConfigurationStoreMapper.java:25)
! at org.skife.jdbi.v2.RegisteredMapper.map(RegisteredMapper.java:35)
I have the following sample data from which I'd like to extract 2 fields: 1) The value after the "T " and before "EmployeeController.Post -" will be the first field <tsid>. 2) The value between "EmployeeController.Post - " and " - End" will be the second field <duration>.

06/30/21 09:39:21.39 p17872 [00226] T ELBJsZX6wk68nXrUKKEd4g EmployeeController.Post - 00:00:00:538 - End

Your help is highly appreciated.
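One possible rex sketch against that sample line (index=app_logs is a placeholder, and the pattern is checked only against this one example, so it may need loosening for other log variants):

    index=app_logs
    | rex "\sT\s(?<tsid>\S+)\sEmployeeController\.Post\s-\s(?<duration>[\d:]+)\s-\sEnd"
    | table tsid, duration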
Hi, I have exposed a method via an MBean in my Java Spring Boot application. There are 10 nodes in my tier. Is there a way I can invoke the MBean method on all the nodes without going to each and every node in the UI? If it is not possible through the UI, is there an API I can invoke so that the controller would invoke the MBean on all the nodes?
Hi community, I need to store an encrypted password used in a Python script. I've created the app with its setup.xml page, and the app is deployed on a search head cluster. The problem is that under Manage Apps I cannot see the "Set Up" button, and if I click on "Launch App" I get 404 - not found.

This is my setup.xml file:

<setup>
  <block title="New Credential" endpoint="storage/passwords" entity="_new">
    <input field="name">
      <label>Username</label>
      <type>text</type>
    </input>
    <input field="password">
      <label>Password</label>
      <type>password</type>
    </input>
  </block>
</setup>

This is my app.conf file:

[install]
is_configured = 0

[ui]
is_visible = 1
label = my app

[launcher]
author = Marta Benedetti
description = my app
version = 1.0.0

Any idea? Thanks a lot, Marta