All Topics

Hi, I hope someone can help me; I am completely new to Splunk. Although I love it so far, I don't really know how to use it yet. I want to filter for events containing both mongodbX and "Couldn't get a connection". The event can contain mongodb1, mongodb2, or mongodb3, as in the example, and I want to use the filtered events to build a graph. Example event:

{"time":"2020-07-24T11:48:21.18957143Z","event":"2020-07-24T11:48:21.189+0000 I REPL_HB [replexec-949] Error in heartbeat (requestId: 649360) to mongodb3:27017, response status: NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit\n","hostname":"node2"}

Thank you.
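One possible approach, offered only as an untested sketch (the index is a placeholder): combine the quoted error phrase with the node names, extract which mongodb node failed with rex, and feed the result to timechart.

```spl
index=* ("mongodb1" OR "mongodb2" OR "mongodb3") "Couldn't get a connection"
| rex "to (?<mongo_node>mongodb\d+):\d+"
| timechart count by mongo_node
```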
Hi, I need to match the host in host.csv with the field in test.csv, but I don't succeed. Could you help me please?

[| inputlookup host.csv | table host ] | lookup test.csv HOSTNAME as host output SITE STATUS | stats values(SITE) as SITE, values(STATUS) as STATUS by host
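A possible rework, untested and assuming test.csv is defined as a lookup whose key column is HOSTNAME: start from inputlookup directly instead of wrapping it in a subsearch, so each host row is enriched in place.

```spl
| inputlookup host.csv
| lookup test.csv HOSTNAME AS host OUTPUT SITE STATUS
| stats values(SITE) AS SITE values(STATUS) AS STATUS by host
```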
Hi all, I know how file precedence works for configuration files at index and search time, but what is the precedence for structural files such as outputs.conf or deploymentclient.conf? My need is: I have to change the Deployment Server in an old installation where, on the Universal Forwarders, outputs.conf and deploymentclient.conf live in %SPLUNK_HOME/etc/system/local, and I'd like to move them into a dedicated TA. So must I manually delete the old files in %SPLUNK_HOME/etc/system/local, or do the new files override the other configurations because they have precedence? Reading the documentation, I think the precedence is the index-time one, which would mean that files in %SPLUNK_HOME/etc/system/local take precedence over the ones in TAs. I'll start a lab to check this, but has anyone encountered this problem? Ciao. Giuseppe
Hi Team, I am trying to fetch the nicSwitch* details of only the corresponding nicName from the JSON data below, which I have not been able to achieve. Help is appreciated! Raw JSON data; the goal is to extract the network switch details of each nicName:   { "hostname": "abc", "inventory": "#####", "fqdn": "xxxxx.xxxx.xxx.xxx.xxx", "ip": "#.#.#.#", "platform": "XXXXX", "version": "XXXXX", "environment": "XXXX", "status": "XXXXX", "subStatus": "XXXXX", "contactSupporTeam": "xxxx", "model": "XXXXX", "product": "SERVER", "serial": "dfd34324", "app": [{ "appName": "XXXXX", "appAcronym": "XXX", "appStatus": "xxxxx", "appOwner": "xxxxxx" }], "pkg": [{ "pkgName": "xxxxx", "pkgVersion": "1.2.3" }, { "pkgName": "yyyyy", "pkgVersion": "2.3.4" }, { "pkgName": "zzzzz", "pkgVersion": "3.4.5" }], "nic": [{ "nicName": "eth4", "nicSwitch": [{ "nicSwitchName": "xxxxxxx", "nicSwitchSerial": "dfgdg45435fgg", "nicSwitchManufacturer": "XXXX", "nicSwitchModel": "XXX22", "nicSwitchVlan": "Vlan###", "nicSwitchChannel": "port-channel3", "nicSwitchPort": "Ethernet107/1/7" }, { "nicSwitchName": "xxxxxxxx", "nicSwitchSerial": "dfsf23432ef", "nicSwitchManufacturer": "XXXX", "nicSwitchModel": "XXXX", "nicSwitchChannel": "port-channel3", "nicSwitchPort": "Ethernet107/1/8", "nicSwitchVlan": "Vlan###" }], "nicDnsName": "", "nicType": null, "nicStatus": "up", "nicSpeed": "10000", "nicFirmware": "", "nicMac": "XX##XXX###XX", "nicDuplex": "FULL", "nicIP": "undefined", "nicNetmask": "" }, { "nicName": "eth5", "nicSwitch": [{ "nicSwitchName": "xxxxxx", "nicSwitchSerial": "dsfsdf3432sdf", "nicSwitchManufacturer": "XXXX", "nicSwitchModel": "XXXXX", "nicSwitchChannel": "port-channel3", "nicSwitchVlan": "Vlan###", "nicSwitchPort": "Ethernet107/1/8" }, { "nicSwitchName": "xxxxxx", "nicSwitchSerial": "fdf345345", "nicSwitchManufacturer": "XXXXX", "nicSwitchModel": "XXXXX", "nicSwitchChannel": "port-channel3", "nicSwitchPort": "Ethernet107/1/7", "nicSwitchVlan": "Vlan###" }], "nicDnsName": "", "nicType": null, "nicStatus": "up",
"nicSpeed": "", "nicFirmware": "", "nicMac": "XXX###XXX", "nicDuplex": "", "nicIP": "undefined", "nicNetmask": "" }, { "nicName": "eth6", "nicSwitch": [], "nicDnsName": "", "nicType": null, "nicStatus": "", "nicSpeed": "", "nicFirmware": "", "nicMac": "", "nicDuplex": "", "nicIP": "#.#.#.#", "nicNetmask": "#.#.#.#" }] }
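One hedged sketch (untested; the index and sourcetype names are placeholders): expanding the nic{} array first keeps each nicName tied to its own nicSwitch entries, then a second spath/mvexpand pass flattens the switches.

```spl
index=* sourcetype=your_json_sourcetype
| spath path=nic{} output=nic
| mvexpand nic
| spath input=nic nicName
| spath input=nic path=nicSwitch{} output=switch
| mvexpand switch
| spath input=switch
| table nicName nicSwitchName nicSwitchPort nicSwitchVlan nicSwitchChannel
```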
In the example below I want only the count of "a", because he has not paid by the end. Also, there are far more data entries than can be counted by hand; below is only a small part. The count should be per customer: only those customers should be counted who have not paid by the end, and once a customer has paid, their earlier unpaid invoices should no longer be considered. The pending and paid invoice counts change when invoices are paid. E.g., on 31st Jan 2020 the customer has not made a payment, so I record that invoice as pending; the pending count shows 1 and the paid count 0. Once the customer pays in the first week of Feb, the pending count goes back to 0 and the paid count to 1.

date        customer  payment_status
01/31/2020  a         unpaid
01/31/2020  b         unpaid
01/31/2020  c         paid
02/31/2020  a         unpaid
02/06/2020  b         paid
02/26/2020  c         paid
03/30/2020  a         unpaid
03/30/2020  b         paid
03/30/2020  c         paid

Any help is appreciated.
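One hedged sketch (untested; it assumes the events carry customer and payment_status fields and that _time is parsed from the date column): take each customer's most recent status and count only those still unpaid at the end.

```spl
| stats latest(payment_status) AS last_status by customer
| where last_status="unpaid"
| stats count AS customers_still_unpaid
```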
Hi All, we have a dashboard which uses three layers of tabs (please refer to the attached screenshot). Issue: when we load the dashboard, by default it opens the first tab of the third layer; ideally it should open the first tab of the first layer. Splunk Version: 7.3.3. Note: we are using the latest tabs.js and tabs.css files.
Hi, I use the code below, and I would like that if the host I fill in via my drilldown doesn't exist, I get the message "No Battery results" in my single panel. Could you help me please?

| inputlookup tablet_host.csv | lookup Pan.csv "Hostname00" as host OUTPUT HealthState00 | search host=$tok_filterhost$ | stats values(HealthState00) as HS
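A possible approach, offered only as an untested sketch: appendpipe can add a fallback row when (and only when) the main search returned nothing, so the single panel shows a message instead of staying empty.

```spl
| inputlookup tablet_host.csv
| lookup Pan.csv "Hostname00" AS host OUTPUT HealthState00
| search host=$tok_filterhost$
| stats values(HealthState00) AS HS
| appendpipe [ stats count | eval HS="No Battery results" | where count=0 | fields HS ]
```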
Dear All, can you please help me? I have tried to solve the question below, but so far I have not found the precise solution.

There are 3 machines in the Splunk system:
Machine 1: Windows server, Search Head and Indexer server.
Machine 2: Windows Domain Controller with a heavy forwarder deployed on it (only local Windows log collection).
Machine 3: syslog server.

The goal is to send the local logs from the heavy forwarder (Machine 2) to:
- the syslog server (all the local logs, Windows Security logs included, with the line breaks changed to ";")
- the Search Head server (only the Windows Security logs, without any modification)

I modified the .conf files under C:\Program Files\Splunk\etc\system\ on the heavy forwarder server. Of course I can see the Windows logs on the heavy forwarder, and some logs are sent to the syslog / Search Head server, but not what I want...

outputs.conf
...
[tcpout:indexer_group]
server=<searchheadserveripaddress>:9997

[syslog:syslog_group]
server=<syslogserveripaddress>:514
type=tcp
...

transforms.conf
...
[send_to_syslog]
REGEX=.
DEST_KEY=_SYSLOG_ROUTING
FORMAT=syslog_group

[send_to_indexer]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=indexer_group
...

props.conf
...
[source::WinEventLog:Security]
TRANSFORMS-routing=send_to_indexer, send_to_syslog
priority=5

[host::*]
TRANSFORMS-routing=send_to_syslog
SEDCMD=s/[\n\r]/;/g
priority=10
...

Thank you.
Hi, I use the code below. In the case that no FreeSpace event exists, I would like to display the message "No disk space events for this host" in my single panel. How can I do this, please?

`diskspace`
| fields FreeSpaceKB host
| eval host=upper(host)
| eval FreeSpace = FreeSpaceKB/1024
| eval FreeSpace = round(FreeSpace/1024,1)
| search host=$tok_filterhost$
| stats latest(FreeSpace) as FreeSpace by host
| table FreeSpace
I am trying to use the Splunk logging library to log events to the HTTP Event Collector via java.util.logging. I followed the steps described in: https://dev.splunk.com/enterprise/docs/java/logging-java/howtouseloggingjava/enableloghttpjava

I verified that the HTTP Event Collector works fine with the snippet of code below run from the EMR cluster, and the curl command also works fine.

RequestBody formBody = new FormBody.Builder()
    .add("username", "abc")
    .build();
Request request = new Request.Builder()
    .url("http://host:8088/services/collector")
    .addHeader("Authorization", "Splunk token")
    .post(RequestBody.create(MediaType.parse("application/json; profile=urn:splunk:event:1.0; charset=utf-8"), "{\"event\": \"Thursday, world!\", \"sourcetype\": \"manual\"}"))
    .build();

However, I can't get it working through Splunk logging in Java.

Java code:

String jsonMsg = "{\"event\": \"Thursday, world!\", \"sourcetype\": \"manual\"}";
Logger logger = java.util.logging.Logger.getLogger("splunkLogger");
logger.info(jsonMsg);

splunk-http-input.properties:

# Implicitly create a logger called 'splunkLogger', set its level to INFO, and
# make it log using the SocketHandler.
splunkLogger.level = INFO
handlers = com.splunk.logging.HttpEventCollectorLoggingHandler

# Configure the com.splunk.logging.HttpEventCollectorHandler
com.splunk.logging.HttpEventCollectorLoggingHandler.url = http://host:8088
com.splunk.logging.HttpEventCollectorLoggingHandler.level = INFO
com.splunk.logging.HttpEventCollectorLoggingHandler.token = token
com.splunk.logging.HttpEventCollectorLoggingHandler.batch_size_count = 1
# com.splunk.logging.HttpEventCollectorLoggingHandler.middleware = HttpEventCollectorUnitTestMiddleware
# com.splunk.logging.HttpEventCollectorLoggingHandler.index=default
com.splunk.logging.HttpEventCollectorLoggingHandler.disableCertificateValidation=true

# You would usually use XMLFormatter or SimpleFormatter for this property, but
# SimpleFormatter doesn't accept a format string under Java 6, and so we cannot
# control its output. Thus we use a trivial formatter as part of the test suite
# to make it easy to deal with.
#com.splunk.logging.HttpEventCollectorHandler.Formatter = TestFormatter

I invoke it with the command:

java -Djava.util.logging.config.file=/home/ec2-user/splunk-http-input.properties -cp java-project-1.0-SNAPSHOT.jar com.mkyong.hashing.SendEvents

Can someone tell me what I am missing here?
Hi Everyone, I am facing some issues with line breaking while ingesting AWS CloudWatch logs using the Splunk_TA_aws app. Any suggestions, please?
How do I convert a timestamp from any timezone to UTC in Splunk? I have a field "DeviceTime" that can hold a value in any time zone. A few examples below:

7/24/2020 9:45:47 AM +05:30
7/23/2020 6:29:45 AM -05:00
7/24/2020 11:21:31 AM +07:00
7/24/2020 4:21:29 AM +00:00

I would like to find the difference in minutes between the current UTC time and the timestamp fields above.
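A possible sketch, untested, assuming DeviceTime always matches the samples above: strptime honors the trailing UTC offset, so the parsed epoch is already absolute and can be compared directly to now().

```spl
| eval device_epoch = strptime(DeviceTime, "%m/%d/%Y %I:%M:%S %p %:z")
| eval diff_minutes = round((now() - device_epoch) / 60, 1)
```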
I'm stuck in a scenario where I want to extract a complete JSON object from a JSON array collection based on my search input criteria, i.e. on an id match condition. Below is an example:  { "message": { messageHeader: "MessageHeader", "messageList": [{ "messageName": "messageNameA", "messageValue": "messageValueA", "messageId": "A_Value" "messageStart": "StartDate_Time_Value1", "messageEnd": "EndDate_Time_Value_1" "messageConsumerCount": "Count_MessageA" }, { "messageName": "messageNameB", "messageValue": "messageValueB", "messageId": "B_Value" "messageStart": "StartDate_Time_Value1", "messageEnd": "EndDate_Time_Value_1" "messageConsumerCount": "Count_MessageB" }, { "messageName": "messageNameC", "messageValue": "messageValueC", "messageId": "C_Value" "messageStart": "StartDate_Time_Value1", "messageEnd": "EndDate_Time_Value_1" "messageConsumerCount": "Count_MessageC" } ], "messageTotalConsumerCount": "Total Value of Header 1" }, "severity": "info" }, { "message": { messageHeader: "MessageHeader", "messageList": [{ "messageName": "messageNameA", "messageValue": "messageValueA", "messageId": "A_Value" "messageStart": "StartDate_Time_Value2", "messageEnd": "EndDate_Time_Value_2" "messageConsumerCount": "Count_MessageA" }, { "messageName": "messageNameC", "messageValue": "messageValueC", "messageId": "C_Value" "messageStart": "StartDate_Time_Value2", "messageEnd": "EndDate_Time_Value_2" "messageConsumerCount": "Count_MessageC" }, { "messageName": "messageNameB", "messageValue": "messageValueB", "messageId": "B_Value" "messageStart": "StartDate_Time_Value2", "messageEnd": "EndDate_Time_Value_2" "messageConsumerCount": "Count_MessageB" }, { "messageName": "messageNameD", "messageValue": "messageValueD", "messageId": "D_Value" "messageStart": "StartDate_Time_Value2", "messageEnd": "EndDate_Time_Value_2" "messageConsumerCount": "Count_MessageD" } ], "messageTotalConsumerCount": "Total Value of Header 1" }, "severity": "info" }

In the above JSON, I want to retrieve the JSON objects where "messageId" = "B_Value". So my desired result should be: { "messageName": "messageNameB", "messageValue": "messageValueB", "messageId": "B_Value" "messageStart": "StartDate_Time_Value1", "messageEnd": "EndDate_Time_Value_1" "messageConsumerCount": "Count_MessageB" }, { "messageName": "messageNameB", "messageValue": "messageValueB", "messageId": "B_Value" "messageStart": "StartDate_Time_Value2", "messageEnd": "EndDate_Time_Value_2" "messageConsumerCount": "Count_MessageB" } The sequence of messageId values can differ; in the JSON above, the "B_Value" occurrences are second and third respectively. Let me know if I need to clarify more. Thanks in advance!
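One hedged sketch (untested; it assumes the events parse as JSON, and the index and sourcetype names are placeholders): expand messageList and keep only the entries whose messageId matches.

```spl
index=* sourcetype=your_json_sourcetype
| spath path=message.messageList{} output=entry
| mvexpand entry
| spath input=entry
| search messageId="B_Value"
| table messageName messageValue messageId messageStart messageEnd messageConsumerCount
```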
Hi Guys, would you know why the Selected fields go missing after I enable this specific calculated field? When I delete that calculated field, the default selected fields show again (host, source, sourcetype). The fields are searchable but do not appear under Selected or Interesting fields.

Enabled:

I only have these in my app:
- default.meta: default access = read : [ * ], write : [ admin ]
- app.conf: just the name of the app and state = enabled.
- props.conf
I noticed several Splunkbase apps that can facilitate the backup and restore process, such as the following:

Git Version Control for Splunk - https://splunkbase.splunk.com/app/4182/#/overview
Version Control for Splunk - https://splunkbase.splunk.com/app/4355/#/overview
Stateful Snapshot for Splunk - https://splunkbase.splunk.com/app/4122/

Has anyone used these, and do you have a recommendation for the ideal one for backing up and restoring conf files and knowledge objects across Splunk components (search heads, deployment server, cluster master) and installed apps/add-ons?
I checked the forums because my 60-day Enterprise license expired and I would like to convert to the free perpetual license. But Splunk won't let me log on, saying the license expired, and I have forgotten my administrator password. How do I fix this? Thanks
Hi. I already have a Splunk query that we use in a production environment. We are now adding a new field that we'd like to filter on. However, we want the query to remain backwards compatible so we can still view the data from before this new field was added. Here's roughly what I'd like:

Current:
index=prod sourcetype="prod" year="2019" jobId="21766782-c79d-40c3-a9bf-a3b7269ef557"

With the new field:
index=prod sourcetype="prod" year="2019" jobId="21766782-c79d-40c3-a9bf-a3b7269ef557" if(exists(type), type="MY_TYPE", "")

I know this isn't the right syntax, but essentially I want to filter on that field if it exists in the data. If it doesn't exist, I want the filter to be skipped (basically use the old query).
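A possible sketch (untested): in SPL a comparison fails for events where the field is absent, so an explicit OR on its absence lets the old events pass through unfiltered.

```spl
index=prod sourcetype="prod" year="2019" jobId="21766782-c79d-40c3-a9bf-a3b7269ef557"
    ((NOT type=*) OR type="MY_TYPE")
```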
Hi, I'm trying to send CloudWatch alerts to Splunk using Lambda. I created the Lambda function using the blueprint "splunk-cloudwatch-logs-processor" in AWS, but the function is throwing an error:

START RequestId: 61c99093-e39e-4e86-90b3-1b97415aaa2c Version: $LATEST
2020-07-23T23:29:33.236Z 61c99093-e39e-4e86-90b3-1b97415aaa2c INFO Received event: { "key1": "value1", "key2": "value2", "key3": "value3" }
2020-07-23T23:29:33.236Z 61c99093-e39e-4e86-90b3-1b97415aaa2c ERROR Invoke Error {"errorType":"TypeError","errorMessage":"Cannot read property 'data' of undefined","stack":["TypeError: Cannot read property 'data' of undefined"," at Runtime.exports.handler (/var/task/index.js:31:47)"," at Runtime.handleOnce (/var/runtime/Runtime.js:66:25)"]}
END RequestId: 61c99093-e39e-4e86-90b3-1b97415aaa2c
REPORT RequestId: 61c99093-e39e-4e86-90b3-1b97415aaa2c Duration: 4.30 ms Billed Duration: 100 ms Memory Size: 512 MB Max Memory Used: 66 MB

Any suggestions? Thanks.
I can't find a way to get results from a StreamingCommand to retain leading spaces. It doesn't matter if the field was fine before the command ran, it is gone afterward, even if the field wasn't processed by the command.

| makeresults
| eval test1 = "  "
| eval test2 = urldecode("%20%20")
| eval ok_here = if(test1==test2 AND len(test1) == 2, "true", "false")
| eval value_for_command = "nothing_special"
| customstreamingcommand field=value_for_command
| eval still_ok_here = if(test1==test2 AND len(test1) == 2, "true", "false")

In this simple case, "still_ok_here" is false, but the same test was true before the command.
Hi team, I want to divide the output result of one query by the output of a second query and get a remainder. I am using the following query but am unable to get any results:

index="wcnp_search-frontend" kubernetes.container_name=search-electrode-app "log.event"=ATC_CLICK
| stats count by log.event
| rename log.event as total_atc_events
| append [ search index="wcnp_search-frontend" kubernetes.container_name=search-electrode-app "log.msg"="ATC click failure"
    | stats count by log.msg
    | rename log.msg as atc_failures ]
| eval error = max(total_atc_events) / max(atc_failures)
| stats count by error

Can anyone please assist?
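One hedged alternative, untested and assuming both event types live in the same index: compute both counts in a single stats with eval conditions, so the division happens on one row instead of across appended rows.

```spl
index="wcnp_search-frontend" kubernetes.container_name=search-electrode-app
    ("log.event"=ATC_CLICK OR "log.msg"="ATC click failure")
| stats count(eval('log.event'="ATC_CLICK")) AS total_atc_events
        count(eval('log.msg'="ATC click failure")) AS atc_failures
| eval ratio = total_atc_events / atc_failures
```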