All Topics

Hi everybody, I have a DB input that works normally: valid connection, a query that returns data, etc. But very often, once or twice a day, it stops indexing data and I have to manually disable and then re-enable it to make it work normally again. When I run

index=_internal source=*dbx2* mi_input://<my source>

the result shows the job simply dead-stops at [action=start_executing_dbinput]; there is no other action afterwards, such as checking the rising column or completing. Since it shows nothing beyond the start action, and no bug or error code, I don't know how to deal with it other than manually restarting the job. Can anyone help me with this?
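Until the root cause shows up in the dbx2 logs, one hedged workaround is a watchdog alert (the index name below is a placeholder for the input's destination index): fire when the index has gone quiet for longer than the input's schedule should allow, and restart the input when it does.

| tstats latest(_time) as last_event where index=<your_db_index>
| eval minutes_stale=round((now() - last_event) / 60)
| where minutes_stale > 60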
Block:

2022-02-14 02:30:00,046 [Worker-3] DEBUG User job started
2022-02-14 02:30:00,063 [Worker-3] DEBUG Calling importData
2022-02-14 02:30:00,063 [Worker-3] DEBUG Initializing External DB connection
2022-02-14 02:30:00,063 [Worker-3] ERROR Exception occured
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email
2022-02-14 02:30:00,567 [Worker-3] DEBUG User job ended
2022-02-14 02:30:00,046 [Worker-3] DEBUG User job started
2022-02-14 02:30:00,063 [Worker-3] DEBUG Calling importData
2022-02-14 02:30:00,063 [Worker-3] DEBUG Initializing External DB connection
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email
2022-02-14 02:30:00,567 [Worker-3] DEBUG User job ended

Expected output:

2022-02-14 02:30:00,063 [Worker-3] ERROR Exception occured
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email

Thanks in advance
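One sketch of a possible approach (index and sourcetype are placeholders, and this assumes the lines arrive as separate events in order): group the lines from the ERROR through the notification line into a transaction, so only the failing span is returned.

index=<your_index> sourcetype=<your_sourcetype>
| transaction startswith="ERROR" endswith="sending Notification Email"
| table _time, _raw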
Hello, I use the search below to calculate a percentage, but I need to prefix the result with + if s > s2 and with - if s < s2. How can I do this, please?

`index` sourcetype="session"
| bin _time span=15m
| eval time=strftime(_time,"%H:%M")
| stats dc(s) as s by time
| table s
| appendcols
    [ search `index` sourcetype="session" earliest=-7d@d+7h latest=-7d@d+19h
    | bin _time span=15m
    | eval time=strftime(_time,"%H:%M")
    | stats dc(s) as s2 by time
    | table s2 ]
| eval perc=round((s/s2)*100,1)."%"
| table perc
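A minimal sketch of one way to do it (assuming the percentage itself stays as-is and only a sign prefix is wanted): compute the prefix with case() and concatenate it in front of the rounded value.

...
| eval sign=case(s>s2,"+", s<s2,"-", true(),"")
| eval perc=sign.round((s/s2)*100,1)."%"
| table perc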
Hi everyone, I want to override an EVAL statement that exists in a Splunkbase TA, but I don't want to modify the Splunkbase TA itself. So I created a custom TA and put the same EVAL statement in it, plus the extra category I want to extract, but it is not working. Can anybody please help me with how I can do that?

Splunkbase TA config (/opt/splunk/etc/apps/TA-microsoft/default/props.conf):

EVAL-internal_message_id = case(category IN ("Events1", "Events2"),'properties.MessageId')

Custom TA config (/opt/splunk/etc/apps/A-csc_cyber_genric_sh_Splunk_TA/default/props.conf):

EVAL-internal_message_id = case(category IN ("Events1","Events2","Events3"),'properties.MessageId')

Thanks in advance
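One hedged suggestion, since the stanza names aren't shown: local always beats default, so placing the override in the custom TA's local/props.conf, under exactly the same stanza name used by TA-microsoft, and exporting the app's props globally, sidesteps any default-vs-default precedence question between the two apps. <same_stanza> below is a placeholder for that stanza.

# /opt/splunk/etc/apps/A-csc_cyber_genric_sh_Splunk_TA/local/props.conf
[<same_stanza>]
EVAL-internal_message_id = case(category IN ("Events1","Events2","Events3"),'properties.MessageId')

# /opt/splunk/etc/apps/A-csc_cyber_genric_sh_Splunk_TA/metadata/local.meta
[props]
export = system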
I'm fetching some data from an API via a Python script and passing it to Splunk, but Splunk is not parsing the JSON format. I've tested my output with a JSON parser with no errors. If I set the sourcetype to something custom, I receive the events as plain text; but when I set the sourcetype to _json, I get a line-breaking error (expected : \). Below is the Python script. I'm using json.dumps while printing. For now I'm writing to a file and ingesting it with a monitor input.

# This script fetches data from the VirusTotal API and passes it to Splunk.
# Checkpointing is enabled to drop duplicate events.
import json, requests, sys, time, os
from datetime import datetime

proxies = {'https': 'http://security-proxy.emea.svc.corpintra.net:3128'}
url = "https://www.virustotal.com/api/v3/intelligence/hunting_notifications"
params = {'limit': 40, 'count_limit': 10000}
headers = {
    "Accept": "application/json",
    "x-apikey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
}

current_time = datetime.now()
file_path = '/opt/splunk/etc/apps/infy_ta_virustotal_livehunt_validation/bin/data/'
complete_name = file_path + f'livehunt_{time.strftime("%Y_%m_%d_%H_%M_%S")}'
keys_filename = '/opt/splunk/etc/apps/infy_ta_virustotal_livehunt_validation/bin/keys.txt'


def write_new_keys_in_file(keys_filename, keys_to_be_indexed):
    # Persist the ids seen in this run so the next run can skip them.
    try:
        with open(keys_filename, 'w') as file:
            for key in keys_to_be_indexed:
                file.write(str(key))
                file.write('\n')
    except Exception as e:
        print(e)


def get_indexed_key(keys_filename):
    # Read the checkpoint file; create it if it does not exist yet.
    try:
        with open(keys_filename, 'r') as file:
            return file.read().splitlines()
    except Exception as e:
        with open(keys_filename, 'w') as file:
            return []


def get_json_data(url, headers, params, proxies):
    try:
        response = requests.get(url=url, headers=headers, params=params, proxies=proxies).json()
        return response
    except Exception as e:
        print(e)
        sys.exit(1)


def write_to_file(complete_name, data):
    # One JSON object per line, so each line can become one event.
    with open(complete_name, 'a') as f:
        json.dump(data, f)
        f.write('\n')


def stream_to_splunk(json_response, indexed_keys, complete_name):
    try:
        keys_to_be_indexed = []
        events_to_be_indexed = []
        for item in json_response['data']:
            keys_to_be_indexed.append(item['id'])
            if item['id'] not in indexed_keys:
                write_to_file(complete_name=complete_name, data=item)
                events_to_be_indexed.append(item)
        if len(events_to_be_indexed):
            print(json.dumps(events_to_be_indexed, indent=4, sort_keys=True))
        return keys_to_be_indexed
    except Exception as e:
        print(e)


def main():
    try:
        json_response = get_json_data(url=url, headers=headers, params=params, proxies=proxies)
        indexed_keys = get_indexed_key(keys_filename=keys_filename)
        keys_to_be_indexed = stream_to_splunk(json_response=json_response,
                                              indexed_keys=indexed_keys,
                                              complete_name=complete_name)
        write_new_keys_in_file(keys_filename=keys_filename, keys_to_be_indexed=keys_to_be_indexed)
    except Exception as e:
        print(e)


if __name__ == "__main__":
    main()
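Since the script writes one JSON object per line, one hedged option is a custom sourcetype that line-breaks on newlines and does search-time JSON extraction, rather than relying on _json's index-time extraction. A sketch (the sourcetype name is made up):

# props.conf on the indexer or heavy forwarder
[vt_livehunt_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
KV_MODE = json
TRUNCATE = 0
DATETIME_CONFIG = CURRENT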
Hi, I created a new trial account and want to work my way through one of the workshops to learn about the product. Where/how can I access the EC2 instance that is spun up by default, so I can install the collector? I haven't been provided with an IP address or details on where to find it. Thanks for any help! Rgds
I am using version 8.2.3, build cd0848707637. Under Settings / Data inputs I entered "dashboard" in the find box and selected Dashboard Studio, which brings up the Dashboard Studio page. From the key features, I select Dashboards, then click Create New Dashboard. I get a popup that asks for: Dashboard title, Edit ID, Description, Permissions. I then click Create and get this message: "You must select a dashboard type." My question is: where do I select the dashboard type?
I am trying to get props.conf to parse and line-break correctly. I am pushing data with curl commands, but it is sending 50 logs in one event. It worked through the UI but fails when sent via curl. I want to break the payload into individual events. Only the first event starts with {"sourcetype": "json","event": { and ends with last_updated (example: "last_updated": "2022-03-24T02:35:41.148727Z" },). The rest of the events start with id and end with last_updated. There are a lot of nested id fields in the event, which I did not post, but the syntax should be something that breaks after last_updated. I want the events to break after "last_updated" followed by the closing curly brackets, and each new event should start from { "id":. Note: only the first event's start is different; all the other events start with id and end with last_updated.

I tried BREAK_ONLY_BEFORE=\"\w*\"\:\s\"\d*\-\d*\-\d*\w\d*\:\d*\:\d*\.\d*\w\" but it is not breaking correctly.

Following are the sample events that I want to break.

Event 1:

{"sourcetype": "json","event": {
.
.
.
},
"created": "2022-02-07",
"last_updated": "2022-03-24T02:35:41.083145Z"

Event 2:

{
"id": 150749,
"name": "no hostname 1660322000234",
.
.
.
"created": "2022-02-07",
"last_updated": "2022-03-24T02:35:41.148727Z"
}

I used the props below; it worked when uploading a sample file via the GUI, but when I use this sourcetype in curl through HEC it is not breaking.

[Netbox]
CHARSET = UTF-8
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)\s+{
MUST_BREAK_BEFORE = \"\w*\"\:\s\"\d*\-\d*\-\d*\w\d*\:\d*\:\d*\.\d*\w\"
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
disabled = false
pulldown_type = true

Curl:

curl -k http://10.xx.xx.xx:8088/services/collector/event -H 'Authorization: Splunk <TOKEN>' -d '{"sourcetype": "Netbox","event": '"$SITEINFO"'}'
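For what it's worth, the /services/collector/event endpoint treats whatever is inside the event field as a single, already-delimited event, so index-time line-breaking settings such as LINE_BREAKER are not applied to it; they do apply to the raw endpoint. A hedged sketch of the same call against the raw endpoint instead (the channel GUID is a made-up placeholder; the raw endpoint requires one):

curl -k "http://10.xx.xx.xx:8088/services/collector/raw?sourcetype=Netbox" \
  -H 'Authorization: Splunk <TOKEN>' \
  -H 'X-Splunk-Request-Channel: 0aeeac95-ac74-4aa9-b30d-6c4c0befe5a7' \
  -d "$SITEINFO"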
I am pulling Azure billing subscription data with the Microsoft Azure Add-on for Splunk. It is only pulling 1000 records per interval (7200), and sometimes it gets no data at all. Can someone help with this? Thanks in advance.
I know this question has been asked a lot before, and I've tried implementing the answers, but I must be doing something wrong because it is not working for me. I have a graph with rooms on the x-axis and count on the y-axis. Each room could be in any of 6 buildings in the data set. I want to color the bars based on the building that the room is in.

This is my search:

index="example" Point_Name=Count
|$v_hours$
| where isnum(value)
| stats max(value) as PeakCount by Building, Room
| eval building=case(Building=="Building1", "B1")
| sort -PeakCount
| rename PeakCount as Count
| head 10
| table Room, Count

This is my XML:

<option name="charting.axisLabelsX.majorLabelStyle.overflowMode">ellipsisNone</option>
<option name="charting.axisLabelsX.majorLabelStyle.rotation">-90</option>
<option name="charting.axisTitleX.visibility">visible</option>
<option name="charting.axisTitleY.visibility">visible</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisX.scale">linear</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.axisY2.enabled">0</option>
<option name="charting.axisY2.scale">inherit</option>
<option name="charting.chart">column</option>
<option name="charting.chart.bubbleMaximumSize">50</option>
<option name="charting.chart.bubbleMinimumSize">10</option>
<option name="charting.chart.bubbleSizeBy">area</option>
<option name="charting.chart.nullValueMode">gaps</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.sliceCollapsingThreshold">0.01</option>
<option name="charting.chart.stackMode">default</option>
<option name="charting.chart.style">shiny</option>
<option name="charting.drilldown">all</option>
<option name="charting.seriesColors">{"B1":0xFF0000}</option>
<option name="charting.layout.splitSeries">0</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.labelStyle.overflowMode">ellipsisMiddle</option>
<option name="charting.legend.placement">right</option>
<option name="height">462</option>

If I table building, Count then B1 is labeled on the x-axis under the correct bar, but the color still does not change. What am I missing?
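One hedged guess at what's missing: with | table Room, Count there is only a single series named Count, so a per-building color can never apply; the building has to become the series. A sketch that splits the columns by building and colors the series with the fieldColors map (which takes a name-to-color map) instead of seriesColors:

index="example" Point_Name=Count
|$v_hours$
| where isnum(value)
| stats max(value) as Count by Building, Room
| eval building=case(Building=="Building1","B1", true(), Building)
| sort -Count
| head 10
| chart max(Count) over Room by building

<option name="charting.fieldColors">{"B1": 0xFF0000}</option>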
Hi, I would like to create a chart of the error rate over time. I have data that shows status=DOWNLOAD_COMPLETE and status=FAILED. I can calculate this for a point in time with the search below, but can anyone help me get the error rate over time?

logType=error OR logType=service context=retrieve status=DOWNLOAD_COMPLETE OR status=FAILED
| stats count(correlationId) as total_count by status
| transpose header_field=status
| eval errorRate=FAILED/(FAILED+DOWNLOAD_COMPLETE)*100
| table DOWNLOAD_COMPLETE, FAILED, errorRate
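A minimal sketch of one way to get it over time (the span is arbitrary): count each status inline with count(eval(...)) inside timechart, so no transpose is needed.

logType=error OR logType=service context=retrieve status=DOWNLOAD_COMPLETE OR status=FAILED
| timechart span=1h count(eval(status="FAILED")) as FAILED, count(eval(status="DOWNLOAD_COMPLETE")) as DOWNLOAD_COMPLETE
| eval errorRate=round(FAILED/(FAILED+DOWNLOAD_COMPLETE)*100,1)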
Hello, does anyone have any idea why this keeps occurring? It happens to me about every 10 minutes. The session timeout is set to 60 minutes. We use SAML with Okta for authentication. I asked the Okta personnel and they said they have a 2-hour session timeout. Any help is greatly appreciated! V/r, mello920
So, I am trying to use a lookup table spammer.csv to filter out results from my search, but I can't get the filtering logic down to make it work completely.

Table:

A1Sender             A1Sender_domain   A2Sender            A2Sender_domain   Recipient{}
fred@flintstone.com                    tinker@sbuggy.com
                     *@bbunny.com      mmouse@wd.com
                     *@wd.com                              *@bbunny.com
                                                                             myemail@me.com

I can get this to work:

{my search}
| search NOT [ | inputlookup spammer.csv | fields A1Sender, A2Sender]
| table _time, A1Sender, A2Sender

How do I code something like:

{my search}
| search NOT [ | inputlookup spammer.csv
    | fields A1Sender, A2Sender
    | fields A1Sender_domain, A2Sender
    | fields A1Sender_domain, A2Sender_domain
    | fields Recipient{}]
| table _time, A1Sender, A2Sender
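A hedged sketch of one pattern that can cover all the column combinations at once (assuming the events actually carry all five fields): null out the empty cells so format only emits the populated fields of each row, then let the subsearch OR the rows together. Fields within a row are ANDed, and * values from the lookup act as wildcards in the generated search.

{my search}
| search NOT [ | inputlookup spammer.csv
    | foreach * [ eval <<FIELD>>=if('<<FIELD>>'="", null(), '<<FIELD>>') ]
    | fields A1Sender, A1Sender_domain, A2Sender, A2Sender_domain, Recipient{}
    | format ]
| table _time, A1Sender, A2Sender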
Hi, I would like to know if it is possible to use a bin span with now(), like with _time (bin _time span=1h)? Thanks
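For what it's worth, bin works on any numeric field, and now() returns epoch seconds, so a sketch like this should bin the same way _time does:

| makeresults count=5
| eval t=now() - (random() % 86400)
| bin t span=1h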
|>TYPE|2022-04-25 18:38:40|2d7e908bo82cb8|1725357403659|HERE|TYPE/272|1,856|1.2.0|ABC|351c481f2de|NONE<|

|>TYPE|2022-04-25 18:38:19|8e61ty7ebd2c25|1725357403659|THERE|TYPE/272||1.2.0|ABCD|4552aa7f9ebd704a91c8|{authType}|{ "message": { "number": "1856345" }, "transaction": { "sample1": "value1", "sample2": "value2" }}<|<|

I am looking to collect data from both of the above messages and correlate the two. I want the numbers 272 and 1,856 from the HERE message, and sample1 and sample2 from the THERE message. Both HERE and THERE share 272, and that is the only common value. I want to build a table joining the two, with sample1, sample2, and 1,856.
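A hedged sketch of one way to do it (the index name and the rex patterns are guesses based on the two samples): pull the shared TYPE/<id> out of both messages, extract the count from HERE and the JSON fields from THERE, then merge on the id with stats.

index=<your_index> ("|HERE|" OR "|THERE|")
| rex "TYPE/(?<type_id>\d+)"
| rex "\|HERE\|TYPE/\d+\|(?<number>[\d,]+)\|"
| rex "\"sample1\":\s*\"(?<sample1>[^\"]*)\""
| rex "\"sample2\":\s*\"(?<sample2>[^\"]*)\""
| stats values(number) as number, values(sample1) as sample1, values(sample2) as sample2 by type_id
| table type_id, number, sample1, sample2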
Hello Splunkers, how can I rename all of OrderNumber1, OrderNumber2, OrderNumber3 as OrderNumber, and Country1, Country2, Country4 as Country? I have attached a screenshot as well. Appreciated in advance.
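If only one of the numbered fields is populated per event (an assumption, since the data is only in the screenshot), coalesce is the usual sketch for collapsing them into one field:

...
| eval OrderNumber=coalesce(OrderNumber1, OrderNumber2, OrderNumber3)
| eval Country=coalesce(Country1, Country2, Country4)
| table OrderNumber, Country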
Hello there, for context, I receive remote logs from different sources on my universal forwarder, and I'm trying to index logs from the same source into the same index, but I don't know how to do that. I tried to follow Create custom indexes - Splunk Documentation, and even tried to index everything that comes from my universal forwarder using this:

[tcp://<ip_addr_forwarder>]
index = <name_of_index>

Can someone help, please? Best regards,
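A hedged sketch of the usual pattern: the index is assigned per input stanza in inputs.conf on the universal forwarder itself, one stanza per source. A [tcp://...] stanza on the indexer only applies to raw TCP traffic, not to forwarder traffic, which arrives via [splunktcp://...]. Paths and index names below are made-up examples:

# inputs.conf on the universal forwarder
[monitor:///var/log/source_a/*.log]
index = index_for_source_a

[monitor:///var/log/source_b/*.log]
index = index_for_source_b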
Is there a way we can search for a "," to find multi-locale or multi-country entries? Basically, instead of the underlined

index=personmetrics logtype=personactivity wrk_grp="Ret,Ce" locale="en-US,en-GB"

1. How do we write something like this?

index=ccpmetrics logtype=ccpactivity (wrk_grp LIKE "," OR locale LIKE ",")
| table personname, wrk_grp, locale

2. Bonus point: then find the stats of personname and the corresponding entries.
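A minimal sketch, keeping the field names from the question (note that LIKE isn't valid in the base search, but * wildcards are, and "*,*" matches any value containing a comma):

index=personmetrics logtype=personactivity (wrk_grp="*,*" OR locale="*,*")
| table personname, wrk_grp, locale

And for the bonus, the same base search with a stats tail:

index=personmetrics logtype=personactivity (wrk_grp="*,*" OR locale="*,*")
| stats count by personname, wrk_grp, locale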
How do I extract all fields from userData?

accept=application/json, timestamp=1651243086870} OutboundWebHookPayload={"clientType":"Client","mediaType":"ask","subject":"EscapeClient","userData":{"country":"UK","lastName":"ELMER","agentId":"7060856","conversationId":"conv_1d55ec01e970c8833e8b8206be287fce","sessionId":"itc_58f7ad65-fcb0-46bd-81-1717f84dd7","chatSessionId":"s_eaf99b35-59fd-4d36-8f8f-c6423f8ec610","locale":"en-GB","languageCode":"en","experience":"Default","publicGuid":"1d55ec01e970c8833e8b8206be287fce","accountNumber":"XXXXXXXXXXXXXXX","firstName":"LUKE","environment":"prod","intentCode":"statement_balance","upfrontRoutingIntent":"CardServices","InteractionType":"Resume","customerId":"508558871407","channelName":"MApp","ProductType":" Card"}}

I tried

| rex field=_raw "userData.:{.IACode.:.(?<IACode>[A-Za-f0-9]+).,.country.*upfrontRoutingIntent.:.(?<upfrontRoutingIntent>[^\"]+).,"
| table IACode upfrontRoutingIntent

but I need other fields, like experience and ProductType, as well.
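A hedged alternative to field-by-field rex (assuming the payload is always well-formed JSON after OutboundWebHookPayload=): capture the whole JSON object once, then let spath explode everything under userData.

...
| rex field=_raw "OutboundWebHookPayload=(?<payload>\{.+\})"
| spath input=payload
| rename userData.* as *
| table country, firstName, lastName, experience, ProductType, upfrontRoutingIntent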
Hello, I'm having trouble creating a dashboard panel that can list values inserted by other users. The panel has an input field where users will put specific IP addresses that must be added to this "list". The only solution I came up with is a lookup file that is updated with a new row every time a user submits a value. I tried this query, which I saw on https://blog.avotrix.com/how-to-add-new-fields-in-existing-lookup-file/:

| inputlookup ip_sospetti append=true
| append [| stats count | eval IP="$added_ip_token$" | table IP]
| outputlookup ip_sospetti.csv

This search adds just one value to the lookup file, and when a new input is submitted it overwrites the last value inserted. Do you have a better solution, or maybe an idea to make this query work? Thanks a lot.
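One hedged guess at the cause: inputlookup reads ip_sospetti while outputlookup writes ip_sospetti.csv; if those resolve to different lookups, every run starts from the stale (or empty) one, so the file appears to keep only the newest value. A sketch with the names aligned and duplicates dropped:

| inputlookup ip_sospetti.csv append=true
| append [| makeresults | eval IP="$added_ip_token$" | table IP]
| dedup IP
| outputlookup ip_sospetti.csv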