All Topics

I wanted to graph the computed value of two fields and group the result by another field:

| mstats avg(kube.pod.cpu.limit) AS cpu_limit avg(kube.pod.cpu.usage_rate) AS cpu_usage WHERE index="metrics" span=auto BY "pod-name"
| eval utilization=((cpu_usage/cpu_limit) * 100)
| timechart values(utilization) agg=max limit=5 useother=false BY "pod-name"
| fields - _span*

but I am not getting any results. Here is the original search I used as a starting point:

| mstats avg(_value) prestats=true WHERE metric_name="kube.container.cpu.usage" AND index="metrics" AND "pod-name"="router*" $mstats_span$ BY "pod-name"
| timechart avg(_value) $timechart_span$ agg=max limit=5 useother=false BY "pod-name"
| fields - _span*
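A hedged sketch of a fix, assuming both metric names actually report into index="metrics": on timechart, the agg= option belongs with the raw-series syntax rather than with values(), so asking for max(utilization) directly is a safer form. The span=1m value is an example:

```
| mstats avg(kube.pod.cpu.limit) AS cpu_limit avg(kube.pod.cpu.usage_rate) AS cpu_usage
    WHERE index="metrics" span=1m BY "pod-name"
| eval utilization=round((cpu_usage / cpu_limit) * 100, 2)
| timechart span=1m max(utilization) limit=5 useother=false BY "pod-name"
```

It is also worth running each mstats aggregation alone first: if kube.pod.cpu.limit returns no datapoints, cpu_limit is null, the eval produces no utilization value, and the final timechart is empty.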
Greetings,

I am trying to add a time filter using a time input on a Splunk dashboard. The default is "All time", and the search is attached to the visualization. But when I set the time picker to "Today", the visualization does not update or re-run the search over today's data. What I expect is that whatever range I set on the time picker, the search runs over that range. How can I solve this?

Thanks in advance
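In Dashboard Studio, a visualization only follows a time input if its data source is bound to the input's tokens; setting a default on the input is not enough. A sketch of the relevant data-source JSON, assuming the time input's token name is global_time (check your input's actual token name):

```
"options": {
    "query": "index=main sourcetype=... | timechart count",
    "queryParameters": {
        "earliest": "$global_time.earliest$",
        "latest": "$global_time.latest$"
    }
}
```

Without the queryParameters block, the search keeps its own fixed time range and ignores the picker.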
I'm trying to send some busy logs through a Heavy Forwarder into our Splunk Cloud so we can do some aggregation to reduce the volume hitting the Cloud IDM. The feeds are picked up via Pub/Sub topics in GCP and when they're transferred to the Heavy Forwarder, the volume of events coming through is only about 5% of the volume that was hitting the Cloud IDM. Nothing has been changed on the GCP side and the VM running the Heavy Forwarder is highly spec'd and is showing virtually no load on CPU, memory or networking. Any ideas as to what could be wrong? Thanks
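One way to see where the events are being lost is to compare throughput as measured by the Heavy Forwarder itself against what reaches the indexers. A sketch, assuming the forwarder's internal logs are being forwarded (the host value is a placeholder):

```
index=_internal host=<heavy_forwarder> source=*metrics.log* group=per_sourcetype_thruput
| timechart span=5m sum(kb) BY series
```

If the forwarder-side numbers are already low, the loss is upstream in the Pub/Sub input; if they are high while indexed volume is low, look for blocked queues on the forwarder (group=queue in the same metrics.log).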
I'm setting up a new DB Connect instance to pull data from an MS SQL Server 2016 database into Splunk:

1. Downloaded the latest version of DB Connect (3.3.1).
2. Downloaded the sqljdbc_4.2 driver and moved the sqljdbc42.jar file to the $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers directory.
3. Created an identity.
4. While creating a connection I get the following error:

[dw-760 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 5b6a559a6b1b46d9
java.lang.NullPointerException: null
at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)
at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getStatus(DatabaseMetadataServiceImpl.java:159)

5. I verified the identity has been granted read permission on the DB I'm trying to query.

Please let me know if anyone else has encountered this issue. Thanks in advance.
Hi All,

When I run the following search in Splunk:

| dbquery wmsqlprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from SYS_CODE_TYPE"

it throws the error: A database error occurred: ORA-00942: table or view does not exist.

But when I run the same search with the schema-qualified table name wmsql.SYS_CODE_TYPE, it returns results:

| dbquery wmsqlprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from wmsql.SYS_CODE_TYPE"

What needs to be done to get results using the first query?

Regards,
Rahul Gupta
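ORA-00942 for the unqualified name usually means the connection's Oracle user is not the wmsql schema owner, so SYS_CODE_TYPE does not resolve in that user's default schema. One standard Oracle-side fix (a sketch; it requires appropriate privileges) is to create a synonym for the connecting user:

```
-- run as the user DB Connect logs in with, or as a DBA for a PUBLIC synonym
CREATE SYNONYM SYS_CODE_TYPE FOR wmsql.SYS_CODE_TYPE;
```

After that, the first dbquery form should resolve without the schema prefix.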
My events look as follows:

#2020-01-01;12:00:00#2020-01-01;12:00:00#content of the event.
#2020-01-01;12:00:01#1970-01-01;00:00:00#content of the event.

I have configured my sourcetype to pick the time highlighted in orange as the event time during indexing; however, sometimes our logs record that time with a date starting in 1970, which Splunk does not recognize because of the "max days ago" limit. In such cases, is it possible to add a condition that falls back to the date in blue as the event time? P.S.: I can change the _time of the event on the search head, but I'm looking for a solution that indexes the events directly with the blue time whenever the orange time is 1970.
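props.conf has no conditional "use timestamp B when timestamp A is invalid" setting, but the timestamp processor keeps scanning up to MAX_TIMESTAMP_LOOKAHEAD characters when a candidate is rejected as out of range, so a tight MAX_DAYS_AGO may let indexing fall through from the 1970 date to the other timestamp. A sketch (the sourcetype name and numbers are assumptions; test against sample events before relying on this behaviour):

```
[my_sourcetype]
TIME_PREFIX = ^#
TIME_FORMAT = %Y-%m-%d;%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 45
MAX_DAYS_AGO = 3650
```

Note the fall-through only works left to right: if the timestamp you normally want is the second one and it reads 1970, Splunk cannot scan backwards to the first.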
Hi,

/opt/splunk/bin/splunk search " index=**** sourcetype="*****:proxylogs" earliest=-15m@m latest=now | fields action,bytes,bytes_in,bytes_out,src,category,date_hour,date_mday,date_minute,date_month,date_second,date_wday,date_year,date_zone,url,site,domain,dest_ip,user,user_bunit,user_work_city,user_work_country,user_work_lat,user_work_long,_time | table action,bytes,bytes_in,bytes_out,src,category,date_hour,date_mday,date_minute,date_month,date_second,date_wday,date_year,date_zone,url,site,domain,dest_ip,user,user_bunit,user_work_city,user_work_country,user_work_lat,user_work_long,_time ')"

Result:
INFO: No matching fields exist.
INFO: Your timerange was substituted based on your search string

The search above returns no results from the CLI, but from the GUI (search head) I do get results. Could anyone please help? Thanks.
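Note that the command as posted has unbalanced quoting: it opens the search string with a double quote but ends with ')", and the inner sourcetype value also uses double quotes, which the shell consumes before Splunk ever sees them. A sketch with single outer quotes (the masked index/sourcetype values are left as posted, the field list is shortened here for readability, and the -auth credentials are a placeholder):

```
/opt/splunk/bin/splunk search 'index=**** sourcetype="*****:proxylogs" earliest=-15m@m latest=now | table action, bytes, src, url, user, _time' -maxout 0 -auth admin:changeme
```

Running the CLI search as a different Splunk user than the one used in the GUI can also explain missing results, since index access and knowledge objects are role-dependent.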
for example :  C:\user\process  -->  C:\\user\\process
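Assuming the goal is to double every backslash in a field value (the field name path is an assumption), one commonly cited form uses eval's replace(). Backslash escaping in SPL is finicky because the string literal and the regex each consume one level of escaping, so verify against your own data:

```
| eval path=replace(path, "\\\\", "\\\\\\\\")
```

Here the four backslashes form the regex \\ (one literal backslash), and the eight-backslash replacement emits two.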
Hi guys, I am getting the error below on a TCP input on port 8002 that streams Check Point logs. Can you suggest how I can resolve this issue?

07-22-2020 15:51:27.889 +0200 ERROR TcpInputProc - Could not bind to port IPv4 port 8002
07-22-2020 15:58:03.290 +0200 INFO TcpInputConfig - IPv4 port 8002 is reserved for raw input
07-22-2020 15:58:03.290 +0200 INFO TcpInputConfig - IPv4 port 8002 will negotiate s2s protocol level 3
07-22-2020 15:58:03.290 +0200 ERROR TcpInputProc - Could not bind to port IPv4 port 8002
07-23-2020 07:49:42.588 +0200 INFO TcpInputConfig - IPv4 port 8002 is reserved for raw input
07-23-2020 07:49:42.588 +0200 INFO TcpInputConfig - IPv4 port 8002 will negotiate s2s protocol level 3
07-23-2020 07:49:42.589 +0200 ERROR TcpInputProc - Could not bind to port IPv4 port 8002
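"Could not bind to port" alongside "reserved for raw input" usually means something already owns port 8002 — either another process or a second Splunk input stanza (for example one [tcp://8002] from an app and another in system/local). A sketch of what to check, assuming Linux and default paths:

```
# does more than one stanza claim the port? (app locations will differ)
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep 8002

# which process currently owns the port at the OS level?
netstat -tlnp | grep :8002
```

If another process holds the port, move the Check Point input to a free port and update the sender to match.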
Hi, the stats command below lets me display data in a table panel. I would like the field headers to appear on the vertical axis (as rows) instead of the horizontal axis (as columns). Could you help me please?

| stats last(Site) as "Geolocation site", last(Building) as "Geolocation building", last(DESCRIPTION_MODEL) as Model, last(OS) as OS by USERNAME
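Assuming the goal is to swap rows and columns, appending transpose is the usual approach; header_field=USERNAME assumes the stats output has one row per USERNAME:

```
| stats last(Site) as "Geolocation site", last(Building) as "Geolocation building",
        last(DESCRIPTION_MODEL) as Model, last(OS) as OS by USERNAME
| transpose 0 header_field=USERNAME column_name=Field
```

The 0 removes transpose's default five-column limit.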
Hello community and @jkat54,

I am currently testing your fancy webtools app. It looks very promising, but I am running into an error I don't understand.

Example (note: the csv simply gets me the ID - I could also do eval team_id="12"):

index=test source="NHL-Teams.csv" Team=*Colorado* | eval team_id=ID | url_string= "https://statsapi.web.nhl.com/api/v1/teams/".team_id | curl uri=url_string method=get debug=true | table curl*

gets me a "curl uri schema not specified" error, while

| curl uri="https://statsapi.web.nhl.com/api/v1/teams/12" method=get debug=true | table curl*

works as intended. I can only assume that this kind of string concatenation for building a URL is not supported, but I don't understand why. Or would you suggest doing it a different way?

Kind regards!
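Two things stand out in the failing search. First, | url_string= "...".team_id is not valid SPL on its own - it needs eval. Second, uri=url_string passes the literal text "url_string" rather than the field's value, which is consistent with the "uri schema not specified" error. If the app supports reading the URI from a field (the parameter name urifield below is an assumption - check the webtools documentation for the exact name), a sketch would be:

```
index=test source="NHL-Teams.csv" Team=*Colorado*
| eval team_id=ID
| eval url_string="https://statsapi.web.nhl.com/api/v1/teams/".team_id
| curl urifield=url_string method=get debug=true
| table curl*
```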
Hi, in the code below I would like the single-value panel's background colour to be green instead of black when the value is "No patch in late". I have tried with rangemap but didn't succeed. Could you help me please?

| inputlookup host.csv
| lookup patchlevel.csv "Computer" as host
| search host=$tok_filterhost$
| stats count by host flag_patch_version
| where isnotnull(flag_patch_version)
| rename flag_patch_version as "Current Patch level"
| fields - count
| eval month=strftime(now(), "%B")
| rex field="Current Patch level" "^(?<versiontype>W\d+)P(?<version>\d+)"
| eval version=tonumber(version)
| eval joiner=versiontype.month
| join type=left joiner
    [| inputlookup patch_in_late.csv
     | rex field=expectedversion "^(?<versiontype>W\d+)P(?<version>\d+)"
     | eval versionlate=tonumber(version)
     | eval joiner=versiontype.month
     | table joiner versionlate ]
| eval patches_number_in_late=if((versionlate-version)>0, versionlate-version, "Up to date!")
| appendpipe
    [| stats count as patches_number_in_late
     | where patches_number_in_late=1 ]
| eval patches_number_in_late=if(patches_number_in_late=1, "No patch in late", patches_number_in_late)
| table patches_number_in_late
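One approach that has worked with classic Simple XML single-value panels is the legacy rangemap convention, where the class low renders green. This is a sketch, and the class names and mapping are assumptions to verify on your Splunk version. Map the string to a numeric status at the end of the search:

```
| eval range_status=if(patches_number_in_late="No patch in late", 0, 1)
| rangemap field=range_status low=0-0 default=severe
```

then point the panel at the generated range field in the panel XML:

```
<option name="classField">range</option>
```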
I have a search:

search | eval difference=now() - strptime(createdDate,"%Y-%m-%d %H:%M:%S.%3N")

This works, except that the createdDate field in my results is in GMT+0 while I'm in GMT+10, so 10 hours are added to every result. I was going to apply a -36000 band-aid fix, but that would break after daylight savings. How can I get the current GMT time?
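One common fix, assuming createdDate is always GMT: append an explicit +0000 offset and let strptime's %z directive interpret it, so no hard-coded 36000 seconds is needed and local daylight-saving changes stop mattering:

```
search
| eval difference=now() - strptime(createdDate." +0000", "%Y-%m-%d %H:%M:%S.%3N %z")
```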
This is the data set from Fundamentals 1. A lot of successful purchase events with the same ProductName don't include categoryId as a field, so I want to fill the null categoryId field with the categoryId of events that have the same ProductName. How do I do it, and how do I scale it up to multiple events? For example, I want the null value in the screenshot above replaced by the categoryId of the same ProductName. I hope I've worded this clearly. Thank you in advance.
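A sketch using eventstats to propagate the known value across events that share a ProductName (field names are taken from the post, and it assumes each ProductName maps to a single categoryId):

```
| eventstats values(categoryId) AS known_categoryId BY ProductName
| eval categoryId=coalesce(categoryId, known_categoryId)
| fields - known_categoryId
```

Because eventstats runs over all events in the search, this scales to any number of events with no per-product editing.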
Hi @gcusello,

While running the following search we get the error stated in the topic.

Search:

| dbquery wmsqlprd "select REC_TYPE, CODE_TYPE, CODE_DESC, SHORT_DESC, USER_ID from SYS_CODE_TYPE"

Error:

command="dbquery", Error getting database connection: ORA-01005: null password given; logon denied

Please help to fix this.

Regards,
Rahul
Hi,

Does anyone know how to change the TCP input queue size? I know other queues such as the index, parsing and aggregation queues are configurable in server.conf, and the output queue can be changed in outputs.conf, but I could not find any information about the TCP input queue setting. I would appreciate it if anyone could let me know. Thanks in advance.
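inputs.conf does expose a per-input in-memory queue size for raw TCP and splunktcp inputs, plus an optional on-disk persistent queue in front of it; the port and sizes below are examples (check the inputs.conf spec for your version):

```
[splunktcp://9997]
queueSize = 1MB
persistentQueueSize = 100MB
```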
Hello, I have a Splunk query like this:

index=someindex container_name=app ( cookie="cookie1" OR cookie="cookie2" ) event=Someevent | timechart span=1m perc50(latency)

The query above creates a single line chart. How can we create two series, one for cookie=cookie1 and the other for cookie=cookie2, in the same panel? Thanks in advance
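Splitting the percentile by the cookie field yields one series per cookie value in a single panel:

```
index=someindex container_name=app (cookie="cookie1" OR cookie="cookie2") event=Someevent
| timechart span=1m perc50(latency) BY cookie
```

This assumes cookie is an extracted field holding the values cookie1/cookie2; if several cookie values can appear on one event, normalize the field first with eval/case.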
I've configured a pair of Phantom servers to use warm standby. As per the documentation, I ran ibackup.pyc --setup after setting up warm standby, then ibackup.pyc --backup, and it works fine. If I try to run the --backup command on the warm standby I get errors:

[23/Jul/2020 01:40:00] ERROR: ERROR [057]: : recovery is in progress HINT: pg_walfile_name() cannot be executed during recovery.:

Presumably this is due to the way warm standby works, but the documentation is unclear. Is this expected behaviour, and is the only option to accept that the standby server's backups will fail until a failover occurs, i.e. the standby becomes the primary?
Hi All,

We are trying to work out the best method for rolling our indexer stack in AWS. We recently migrated to SmartStore, and one week after migration a number of indexers auto-healed, which resulted in corrupted buckets all over the place. Our current thought is to put the indexers into manual detention mode prior to the stack roll, but we are not sure whether this forces a bucket roll and replication to S3, or whether another step is required to force that action.
The reason here being that the organization we're setting up Splunk ES for is in the process of centralizing 4 different Active Directories into a single centralized one (Azure AD). We're planning to wrap up our implementation of Splunk ES in a month, while the central AD implementation will be done in December. Does anyone see any concerns or issues with bringing in only asset data for now, and then waiting until December to introduce the identity data? I understand that we'll just miss out on contexts associated with users performing actions that result in notable events, but wondering if there will be any actual concerns that we need to take into account.