All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi! Could you please help me with this special case of search? This is my data:

User App
1. user1 appA
2. user1 appB
3. user2 appB
4. user1 appA

To get the hits per user and app by hour, I use the following:

| timechart span=1h count by app

And now my question: I would like to look at the events from the last 7 days and, for each app, get the maximum hourly count for each day. I have tried it with a second timechart after the first one and a span=1, but without success. Thank you for your help! Robert
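For what it's worth, the "maximum hourly count per app per day" step can be sketched outside Splunk in plain Python (illustrative only; the hourly rows below are made-up counts, not data from the question):

```python
from collections import defaultdict

# Hypothetical hourly counts: (day, hour, app, count)
hourly = [
    ("2020-03-26", 9, "appA", 3),
    ("2020-03-26", 10, "appA", 7),
    ("2020-03-26", 9, "appB", 2),
    ("2020-03-27", 11, "appA", 5),
]

# For each (day, app) pair, keep only the maximum hourly count.
daily_max = defaultdict(int)
for day, hour, app, count in hourly:
    daily_max[(day, app)] = max(daily_max[(day, app)], count)

print(dict(daily_max))
# → {('2020-03-26', 'appA'): 7, ('2020-03-26', 'appB'): 2, ('2020-03-27', 'appA'): 5}
```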
I have a use case to calculate the time difference between events grouped together by the transaction command. An example is given below.

{ "timeStamp": "Fri 2020.03.27 01:10:34:1034 AM EDT", "step": "A" }
{ "timeStamp": "Fri 2020.03.27 01:10:38:1038 AM EDT", "step": "B" }
{ "timeStamp": "Fri 2020.03.27 01:10:39:1039 AM EDT", "step": "C" }
{ "timeStamp": "Fri 2020.03.27 01:10:40:1034 AM EDT", "step": "D" }

I have two requirements:
1. Is it possible to get the time difference between consecutive steps?
STEP B 4 sec
STEP C 1 sec
STEP D 1 sec
2. If the above is possible, how can I get the average elapsed time between two steps across all the transactions that have steps A, B, C, D?
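As an aside, the first requirement can be reproduced in plain Python (a toy sketch, not Splunk code; the timestamps are simplified to whole seconds with the millisecond fragment and timezone dropped, and `step_deltas` is a hypothetical helper):

```python
from datetime import datetime

# Sample events from the question, with simplified timestamps.
events = [
    {"timeStamp": "2020.03.27 01:10:34", "step": "A"},
    {"timeStamp": "2020.03.27 01:10:38", "step": "B"},
    {"timeStamp": "2020.03.27 01:10:39", "step": "C"},
    {"timeStamp": "2020.03.27 01:10:40", "step": "D"},
]

def step_deltas(events):
    """Return (step, seconds since previous step) for each event after the first."""
    times = [datetime.strptime(e["timeStamp"], "%Y.%m.%d %H:%M:%S") for e in events]
    return [
        (events[i]["step"], (times[i] - times[i - 1]).total_seconds())
        for i in range(1, len(events))
    ]

print(step_deltas(events))  # → [('B', 4.0), ('C', 1.0), ('D', 1.0)]
```

Averaging the same deltas over many such transactions would then be a matter of grouping by step and taking the mean.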
why we are using heatmap what are the usages any apps and add-ons to this any example queries
All,

Trying to make a basic scripted Python lookup. The examples and tutorials were just way over my head, so I'm trying to do something simpler. I copied the example file and tried to simplify the problem a little. What I am aiming to do here is pass a field called 'mystring' and get back a field called 'myoutput'. I am passing hello as the field value for mystring and expecting world as the value in the new field myoutput.

When I execute this:

index=* | head 1 | eval mystring = "hello" | lookup mylookup mystring

I get: "Script execution failed for external search command '/opt/splunk/etc/apps/TA-myapp/bin/mylookup.py'."

Here is my Python:

#!/usr/bin/env python
import csv
import sys

def main():
    # Splunk invokes an external lookup as: mylookup.py <input_field> <output_field>
    if len(sys.argv) != 3:
        print("Usage: python mylookup.py [mystring] [myoutput]")
        sys.exit(1)

    mystring = sys.argv[1]  # name of the input field
    myoutput = sys.argv[2]  # name of the output field

    # Splunk passes the rows as CSV on stdin and reads the result from stdout.
    r = csv.DictReader(sys.stdin)
    w = csv.DictWriter(sys.stdout, fieldnames=r.fieldnames)
    w.writeheader()
    for result in r:
        result[myoutput] = "world"
        w.writerow(result)

main()
Hello,

I have Splunk Enterprise Version 8.0.1 on my PC. I am trying to add a Cisco NAE (Network Assurance Engine) host to Splunk, and I get an error every time I try to add the NAE host. I am attaching a screenshot; please take a look.

As a workaround I have tried deleting the files under the local folder at C:\Program Files\Splunk\etc\apps\TA_cisco-candid\local and then restarting Splunk, but that doesn't help either. Here is the actual error:

Encountered the following error while trying to update: Error while posting to url=/servicesNS/nobody/TA_cisco-candid/admin/cisco_nae_server_setup/cisco_nae_server_setup_settings

I would appreciate your help. Thanks,
Hi Splunkers,

I have a use case to deploy; please refer to the attached image.
- On clicking "Choose file", it should browse the local machine.
- On clicking "Upload", it shall store the uploaded file in the lookups directory of a specific app.
- On clicking "Submit", we have an SPL query that will merge the lookup; in merging the lookup, I have to use the name of the uploaded file, which has to be dynamic.
Logs for Splunk do not flow after the following errors:

ERROR KVStoreIntrospection failed to get introspection data
ERROR KVStoreBulletinBoardManager Failed to start KV Store process. See mongod.log and splunkd.log for details.
ERROR KVStoreConfigurationProvider Could not start mongo instance. Initialization failed.
ERROR KVStoreConfigurationProvider Could not get ping from mongod.
ERROR KVStoreBulletinBoardManager KV Store changed status to failed. KVStore process terminated.
ERROR KVStoreBulletinBoardManager KV Store process terminated abnormally (exit code 14, status exited with code 14). See mongod.log and splunkd.log for details.
Hi, I have a problem displaying the next version to compare with the currently selected version. The code below works, but when I select the latest version I cannot handle the null value in VERSION. When I select the latest version, I want the stats count to run from version_0 (all values except the currently selected one). How can I display this? Thanks in advance for any help!

index=abcd MODEL IN ($model$) BUILDTYPE=$buildtype$ source="source1"
| search VERSION > $version$
| stats count by VERSION

I tried the code below, but it didn't work because the null values in temp get replaced:

index=abcd MODEL IN ($model$) BUILDTYPE=$buildtype$ source="source1"
| eval version = VERSION
| eval temp = if(version > $version$, VERSION, null())
| eval temp1 = if(isnull(temp), mvindex(VERSION,1), temp)
| stats count by temp1
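The fallback logic the poster seems to want can be sketched in plain Python (illustrative only; `count_newer_versions` is a hypothetical helper, and version ordering is assumed to be lexicographic):

```python
def count_newer_versions(versions, selected):
    """Count occurrences of each version strictly newer than `selected`.
    Falls back to every version except `selected` when none are newer,
    i.e. when the latest version is the one selected."""
    newer = [v for v in versions if v > selected]
    if not newer:  # latest version selected: compare against all the others
        newer = [v for v in versions if v != selected]
    counts = {}
    for v in newer:
        counts[v] = counts.get(v, 0) + 1
    return counts

versions = ["version_0", "version_1", "version_2", "version_2"]
print(count_newer_versions(versions, "version_2"))
# → {'version_0': 1, 'version_1': 1}
```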
I would like to display "Zero" when the 'stats count' value is '0'.

index="myindex" "client.ipAddress" IN (10.12.12.13, 10.12.12.14)
| stats count
Hello All,

I have data like this:

X1=[A(status=X, reason=Y), A(status=Z, reason=Y), A(status=xyz, reason=abc)]

Now when I use the query:

<search criteria> | table status, reason

it gives only the values "X" and "Y".
1. Trying to understand why it is not considering the values Z & Y and xyz & abc.
2. If I have to get the values Z & Y and xyz & abc, how do I retrieve them?
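Outside SPL, the behavior can be illustrated in Python: a single-match extraction returns only the first (status, reason) pair, while a find-all style extraction returns every pair (a sketch, using the exact event string from the question):

```python
import re

raw = "X1=[A(status=X, reason=Y), A(status=Z, reason=Y), A(status=xyz, reason=abc)]"

# A single match stops at the first pair...
first = re.search(r"status=(\w+), reason=(\w+)", raw).groups()
print(first)  # → ('X', 'Y')

# ...while findall pulls out every (status, reason) pair.
pairs = re.findall(r"status=(\w+), reason=(\w+)", raw)
print(pairs)  # → [('X', 'Y'), ('Z', 'Y'), ('xyz', 'abc')]
```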
Hi Experts and Splunkers,

We have an existing Splunk environment which consists of:
- 3 x clustered Search Heads
- 3 x clustered Indexers
- 1 x heavy forwarder which has several add-ons (like DB Connect, AWS Add-on) and also exposes an HEC endpoint
- Other servers for other functions (like deployer, cluster master, license master, etc.)

We have been asked by our client to implement redundancy for the heavy forwarder as well, as it is currently a single point of failure. More specifically, we would like to have 2 HF servers for high-availability purposes, ideally Active-Active like IDX and SH.

Through our research and reading of Splunk docs and answers, we understand we can set up multiple HF servers without having to worry about data duplication for inbound data (such as inbound data from UFs with autoLB, or inbound data via HEC behind a load balancer). But how can we manage the data that the add-ons on the HF servers are pulling from the source systems, such as DB Connect and the AWS add-on? We expect we will end up with duplicated data if we set up 2 HF servers (active-active) with the same set of add-ons installed on both.

Thanks for your input in advance!
Hello,

I get this error message every time I try to add the search peer to the master:

Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=113.134.117.122:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=500 expected_response_code=2xx status_line="Internal Server Error" socket_error="No error" remote_error=Cannot add peer=18.218.129.11 mgmtport=9089 (reason: http client error=Connect Timeout, while trying to reach https://18.218.129.11:9089/services/cluster/config). [ event=addPeer status=retrying AddPeerRequest: { _id= _indexVec=''active_bundle_id=3BA0B5A63B5F98681601E92106214CBE add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=2 last_complete_generation_id=0 latest_bundle_id=3BA0B5A63B5F98681601E92106214CBE mgmt_port=9089 name=EE4D1ED7-70B3-4AAC-A5E6-0A920EEE2FFC register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=ip-12-31-4-60.us-east-2.compute.internal site=default splunk_version=8.0.2.1 splunkd_build_number=f002026bad55 status=Up } ]. 3/27/2020, 2:15:20 AM

Can someone help me with this? I tried removing the old configuration in server.conf and restarted many times, but no use.
I have events with GMT time. I want to convert it to EST.

Wed, 25 Mar 2020 21:43:31 GMT title="Webex Meetings: Users connecting to Webex Meetings may experience latency or failures joining computer audio"

Thanks in advance
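Outside of SPL, the conversion itself can be sketched in Python; note that on 25 Mar 2020 the US Eastern zone is on daylight saving time, so the result is EDT (UTC-4) rather than EST (requires Python 3.9+ for zoneinfo):

```python
from email.utils import parsedate_to_datetime
from zoneinfo import ZoneInfo

# The timestamp format is taken from the event in the question.
gmt = parsedate_to_datetime("Wed, 25 Mar 2020 21:43:31 GMT")

# Convert to US Eastern time; zoneinfo handles the EST/EDT switch.
est = gmt.astimezone(ZoneInfo("America/New_York"))
print(est.strftime("%a, %d %b %Y %H:%M:%S %Z"))
# → Wed, 25 Mar 2020 17:43:31 EDT
```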
Hello all,

I try to start the Enterprise Console after finishing installing it on Ubuntu with https://host-virtual-machine:9191, and it doesn't start (Unable to connect). I tried navigating to /bin and running start-platform-admin through the terminal, and I get "command not found".
I've noticed that some of the dashboard panels use EventCode=<Event ID number> while others use EventDescription="<Event description>". For consistency, I would like to propose that all be changed to use:

(EventCode=<Event ID number> OR EventDescription="<Event Description>")
I am trying to create a dashboard with a search that shows the top 10 entries, but I also need to be able to export all the results.
index=environment sourcetype=infinity_thermostat <shows all the extracted fields and values under "Interesting Fields">

When I click an interesting field, see its values, and select a value (which adds it to the search), zero results are returned. Is this a bug in recent versions? I've seen other "similar" posts, and some talk about workarounds such as fields.conf, but this is pretty straightforward: the search-time extractions are working, they are just not searchable when used in the search.

cooling=idle is the example I'm using, which returns zero results; cooling=idle* (zero results), cooling=idl* (zero results), cooling=id* (results), cooling=i* (results), cooling=* (all results), cooling=*idle (results).

Thank you for any thoughts/help. Screenshots attached showing the issue.
As invoking a script from an alert action is deprecated, I tried using alert_actions.conf, but it is not working. The conf is attached below. Please help me find my mistake and make it work. The script name is test, used as the stanza in alert_actions.conf, and it is in bin.

alert_actions.conf

[test]
is_custom = 1
label = Custom Alert Action
description = Triggers a custom alert action
icon_path = appIcon.png
alert.execute.cmd = /Data/splunk/etc/apps/0_script_test/bin/test.sh
Hi,

As running a script invoked from an alert action is deprecated, I tried a custom alert action pointing to a script, but it is not working. Below is the conf. test is the stanza name and test.sh is the script name, which I kept in the bin folder. Please help with this.

alert_actions.conf

[test]
is_custom = 1
label = Custom Alert Action
description = Triggers a custom alert action
icon_path = appIcon.png
alert.execute.cmd = /Data/splunk/etc/apps/0_script_test/bin/test.sh
disabled = 0
Hello,

I have installed the Syndication Input app from Splunkbase (https://splunkbase.splunk.com/app/2646/). I gave it the URL https://status.webex.com/history.rss, set a custom sourcetype, and created a new index=webex. But I see data going to index main with source=/opt/splunk/var/spool/splunk/syndication___RSS_WEBEX_1585256859.14_12157.stash_syndication_input and sourcetype=stash_syndication_input-too_small, and the timestamp is not recognized. I want to use the custom sourcetype and index so that the events break correctly and the timestamp is recognized. How do I create the input, and which file do I edit?

Thanks in advance