All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, We have close to 1,000 indexers in our Splunk cluster on AWS. Each indexer has 15 TB of local SSD storage. Our retention is 30 days and we have enabled SmartStore with AWS S3. The total S3 bucket size for our cluster is reported as around 9 PB, yet disk usage on almost all of our indexers is around 95%, which works out to 1000 * 0.95 * 15 TB ≈ 14.25 PB. What is taking up the additional ~5 PB of disk space on the indexers? I'm sure the hot data (which isn't on S3) is definitely not 2.5 PB (RF = 2) in size. Can someone please shed some light here? Thanks.
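A minimal SPL sketch for narrowing this down, assuming dbinspect can reach all cluster peers: break the local footprint down by index and bucket state, so hot buckets can be separated from the SmartStore cache copies.

| dbinspect index=*
| stats sum(sizeOnDiskMB) as totalMB by index, state
| eval totalGB = round(totalMB / 1024, 1)
| sort - totalGB

If the cached (warm) copies dominate, the cache manager's eviction settings rather than retention would be the first place to look.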
Hi, sorry for my direct question. This match() is in an eval and I get the error "Regex: quantifier doesn't follow a repeatable item". Do you know where the issue is? Thank you
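This PCRE error usually means a quantifier (+, *, ?, or {n}) appears where there is nothing before it to repeat, most often an unescaped literal. A hypothetical sketch, since the original expression isn't shown (the field name phone and the pattern are made up):

| eval ok=if(match(phone, "+91\d+"), "yes", "no")

fails with exactly this error because the leading + has no preceding item to quantify, while escaping the literal works:

| eval ok=if(match(phone, "\+91\d+"), "yes", "no")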
Hi, I need help fine-tuning my SPL query. The _time field is not formatted properly when we configure it in a dashboard.

index=sslvpn sourcetype="sslvpnsourcetype" action=failure
| iplocation accessIP
| search Country="Canada"
| stats values(accessIP), count by user, _time, reason
| eval _time=strftime(_time, "%d/%m/%Y %I:%M:%S %p")
| table _time, user, values(accessIP), reason, count
| rename user as Username, values(accessIP) as "Access IP", reason as "Reason", count as Count

This is the result table (_time column) when running in the Search & Reporting app: [screenshot] This is the result (_time column) when we configure it in a dashboard (Dashboard Studio): [screenshot] Please assist us. Thank you.
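A hedged workaround: Dashboard Studio applies its own rendering to the _time column, which can override a string produced by eval. Formatting into a separately named column sidesteps that:

index=sslvpn sourcetype="sslvpnsourcetype" action=failure
| iplocation accessIP
| search Country="Canada"
| stats values(accessIP) as "Access IP", count as Count by user, _time, reason
| eval Time=strftime(_time, "%d/%m/%Y %I:%M:%S %p")
| table Time, user, "Access IP", reason, Count
| rename user as Username, reason as Reason

Since Time is a plain string rather than the reserved _time field, the dashboard should display it exactly as formatted.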
Bad Request — editTracker failed, reason='Unable to connect to license master=https://172.31.17.138:8089 Error connecting: Connect Timeout'
Hi Fellow Splunkers, Good day. I am currently migrating some applications from On-Prem to Splunk Cloud. From app vetting, would anyone be able to suggest possible fixes/resolutions for this check_hotlinking_splunk_web_libraries check? The errors point to JS files that work well On-Prem and are in their correct location for packaging an application (appserver/static).

Name: check_hotlinking_splunk_web_libraries
Description: Check that the app files are not importing files directly from the search head.
Details: Embed all your app's front-end JS dependencies in the /appserver directory. If you import files from Splunk Web, your app might fail when Splunk Web updates in the future. Bad imports: ['vizapi/SplunkVisualizationBase', 'vizapi/SplunkVisualizationUtils']
File: /tmp/tmp4bxeox7h/splunk_app/appserver/static/visualizations/VUmeter/src/visualization_source.js

Appreciate any help/advice. Thank you.
Below is my query1:

index=adc source=abc "FilesTrasfered DO980" | timechart span=1d count | stats count as "DO980 Files"

query2:

index=adc source=abc "FilesTrasfered DO981" | timechart span=1d count | stats count as "DO981 Files"

I tried to combine the 2 queries and get the result in table format, so I used the append command, but I am getting the results in 2 different rows:

DO980 Files   DO981 Files
500
              230

But I want to get the results in the same row, in the format shown below:

DO980 Files   DO981 Files
500           230
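A hedged single-search alternative that avoids append entirely: count both markers in one pass and emit them as two fields on the same row (assuming "FilesTrasfered" is the literal string in your events):

index=adc source=abc ("FilesTrasfered DO980" OR "FilesTrasfered DO981")
| eval d980=if(like(_raw, "%FilesTrasfered DO980%"), 1, 0)
| eval d981=if(like(_raw, "%FilesTrasfered DO981%"), 1, 0)
| stats sum(d980) as "DO980 Files", sum(d981) as "DO981 Files"

Because a single stats produces a single result row, both counts land side by side.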
Hi, I have a set of events from an index with user details as below, and I am looking to populate the events with their manager's name.

ID  Name   MgrID
1   Tom    4
2   Rick   1
3   Harry  1
4   Boss   5
5   CEO    5

I want to add another column to the result with MgrName, like below, using the MgrID and re-referencing the same index again:

ID  Name   MgrID  MgrName
1   Tom    4      Boss
2   Rick   1      Tom
3   Harry  1      Tom
4   Boss   5      CEO
5   CEO    5      CEO

I tried to come up with something but so far no luck; I'd appreciate it if someone has any suggestions or has done this before. Thanks
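A hedged sketch of the self-join approach, assuming the field names above (the index name my_index is made up): build the ID-to-Name mapping in a subsearch, rename its ID to MgrID, and left-join it back:

index=my_index
| table ID Name MgrID
| join type=left MgrID
    [ search index=my_index | stats latest(Name) as MgrName by ID | rename ID as MgrID ]
| table ID Name MgrID MgrName

Subsearch limits apply (by default 50,000 rows for join), so for a large user base a lookup file populated via outputlookup may be safer.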
Hello all, I have a search that's something like this:

index=* sourcetype=* ID=* (value=1 OR value=2 OR value=3)
| stats list(_raw) as events by ID value msg
| table ID value msg

Next, I utilize a drilldown option that adds the chosen value into a new search. Basically:

index=* sourcetype=* ID=* value=1
| table ID value msg

The point is to group events into one list based on them having the same ID and a specific value. Now, when I click the drilldown, sometimes the table will include rows with value=1 whose "msg" field is irrelevant to the data I'm searching for. Is it possible to do something like:

index=* sourcetype=* ID=* value=1
| table ID value msg
| eval msg=if(msg=="bad", "Remove From Table", msg)

Sorry for being vague, but I cannot post the actual searches. I hope this makes sense.
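A hedged sketch: rather than rewriting msg with eval, the unwanted rows can be dropped outright, either in the base search or with a post-filter (the literal "bad" stands in for whatever values are irrelevant):

index=* sourcetype=* ID=* value=1 NOT msg="bad"
| table ID value msg

or, after the table:

| where msg!="bad"

Filtering in the base search is cheaper, since the unwanted events never leave the indexers.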
Hi all, so let's suppose I have the following table:

Job_ID  Parameter_A  Parameter_B
1       "Car"        "Red"
2       "Bus"        "Blue"

I want to get the value "Red" and use it in an eval function. How do I do it? Thanks!   @bowesmana calling you for help like always!!!
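A hedged sketch using eventstats: copy the Parameter_B of the row where Parameter_A="Car" onto every row, then use it in eval (the follow-on eval is just an illustration):

... | eventstats values(eval(if(Parameter_A="Car", Parameter_B, null()))) as car_value
| eval matches_car=if(Parameter_B=car_value, "yes", "no")

eventstats leaves the original rows intact, which makes the extracted value available wherever the eval needs it.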
I want to pass the input token to my base search. The panel shows "No results found", but when I click on "Open in Search" I can find the results. What's the issue here?

<fieldset submitButton="false">
  <input type="dropdown" token="Month">
    <label>Month</label>
    <fieldForLabel>date</fieldForLabel>
    <fieldForValue>date</fieldForValue>
    <search>
      <query>| inputlookup test.csv | table date</query>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </search>
    <search id="bs1">
      <query>index=xyz source=*.csv | eval Date="$Month$" | eval Date1=date_year + date_month | where Date1=Date | lookup test.csv date as Date OUTPUT source as Source | where source=Source</query>
      <earliest>0</earliest>
      <latest></latest>
    </search>
  </input>
</fieldset>
<panel>
  <single>
    <title>PRODUCTION</title>
    <search base="bs1">
      <query>| search me | stats sum(me) as i</query>
    </search>
    <option name="drilldown">none</option>
    <option name="refresh.display">progressbar</option>
  </single>
</panel>
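A hedged observation and sketch: in the XML above, the base search with id="bs1" is declared inside the <input> element. For a post-process (base="bs1") panel to see it, the base search generally needs to be a direct child of the dashboard root, and it should retain the fields the post-process uses (for example by ending in table or a transforming command). A sketch of the reshuffled skeleton, keeping the original queries:

<form>
  <search id="bs1">
    <query>index=xyz source=*.csv | eval Date="$Month$" | eval Date1=date_year + date_month | where Date1=Date | lookup test.csv date as Date OUTPUT source as Source | where source=Source | table me</query>
    <earliest>0</earliest>
    <latest>now</latest>
  </search>
  <fieldset submitButton="false">
    <!-- dropdown input as before -->
  </fieldset>
  <row>
    <panel>
      <single>
        <search base="bs1">
          <query>| stats sum(me) as i</query>
        </search>
      </single>
    </panel>
  </row>
</form>

The trailing | table me is the assumption doing the work here: post-process searches only see fields the base search explicitly retains.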
Hi folks, how can I search data from my ES SH from the SHC (Splunk Cloud)? Is there a way to do so? I'm trying to use the | rest /servicesNS/-/-/saved/searches query from my SHC to search the saved searches on my ES SH, but I was unable to do so; it seems there is no way of dispatching REST to the ES SH. What about if I create a summary index with the output of | rest /servicesNS/-/-/saved/searches on the ES SH? Would I be able to search that data from my SHC? I appreciate your help.
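A hedged sketch of the summary-index route, assuming a scheduled search on the ES SH and an index (summary_ss here, a made-up name) that is searchable from both environments:

| rest /servicesNS/-/-/saved/searches
| fields title, search, cron_schedule, disabled
| collect index=summary_ss

Once collect has written the rows to summary_ss, the SHC can query them with an ordinary index=summary_ss search, provided both search heads can reach that index on the shared indexer tier.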
Hi, checking to see if anyone uses Splunk to monitor their Proofpoint message queues. If so, how are you doing this via the product? Thanks
Hi, I have two questions here: 1. In the drilldown search I have given dest=$dest$ and it is not working, and when I click on the contributing link it reflects the same. 2. When I click on the drilldown search it takes me to the search window with a time range of the last 30 minutes, but what I expect is the custom time range from when the event got triggered. I kept the offset values at default. Please let me know. Thanks
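If this is a Simple XML table drilldown, a hedged sketch that both passes the clicked value and pins the time range (the index name is made up; $row.dest$ and the $earliest$/$latest$ time tokens are standard drilldown tokens):

<drilldown>
  <link target="_blank">search?q=index%3Dmain%20dest%3D$row.dest$&amp;earliest=$earliest$&amp;latest=$latest$</link>
</drilldown>

If the drilldown instead comes from an ES notable/contributing-events link, the offset settings live on the correlation search, so this sketch would not apply there.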
I just went through this, so I'm posting here as I could not find the commands to fix it and had to open a ticket with support. The root cause was that one of my servers had an expired server.pem certificate, which prevented the KV Store upgrade from working during the upgrade. This was on a Windows heavy forwarder. Those are standalone, and all the docs have commands for a SH cluster. So here are the standalone commands, hoping this helps someone. It took me a while to get them out of support.

./splunk migrate kvstore-storage-engine --target-engine wiredTiger
./splunk migrate migrate-kvstore
./splunk show kvstore-status --verbose

Thanks, Lee.
We need to index logfiles from our monitored devices which are partitioned into two segments. The first segment is CSV. The last segment is events.

col1,col2,col3,…,colN
col1,col2,col3,…,colN
col1,col2,col3,…,colN
.
.
.
event1
event2
event3
…
eventN

This data from the logfile needs to be sent to one index with two sourcetypes: sourcetype_csv for the first segment in the logfile and sourcetype_events for the last segment. How do we structure the inputs.conf, props.conf, and transforms.conf for this? We were thinking we could leverage filtering to take advantage of the fact that the events are all prefixed and postfixed with "***". However, there does not seem to be a way to have one logfile partitioned into more than one sourcetype.
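A hedged sketch of one mechanism that can do this: ingest the whole file under the CSV sourcetype, then rewrite the sourcetype of the event lines at parse time with a transform keyed on the "***" prefix (the monitor path and stanza names are hypothetical):

# inputs.conf
[monitor:///var/log/devices/mixed.log]
sourcetype = sourcetype_csv

# props.conf
[sourcetype_csv]
TRANSFORMS-route_events = set_events_sourcetype

# transforms.conf
[set_events_sourcetype]
REGEX = ^\*\*\*
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::sourcetype_events

The caveat is that index-time transforms run per event, so this relies on the CSV segment and the event segment breaking into separate events; the line-merging settings in props.conf would need to cooperate.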
Hello Everyone, I am ingesting data from Azure Event Hub into Splunk using the Splunk Microsoft Cloud Services Add-on. Now I am looking for an app which I can use to visualize these events. Can anyone please suggest any pre-built dashboards for Event Hub data? Thanks,
Hello, I have a TCP stream incoming with XML Call Data Records (CDRs); enclosed at the end is an example. The CDR contains information about the caller and destination phone. There are several SEDCMDs in props.conf to take lines like <party type="orig"... and convert them into <orig_party... The problem is with lines whose only differentiation is their position in the data structure. With the XML lines that start with <RTCPstats> I need to modify their fields: I need the first line to be ingested as "<orig_PS>31367</orig_PS> <orig_OS>6273400</orig_OS>..." and the second line as "<term_PS>31366</term_PS> <term_OS>6273200</term_OS>...".

<flowinfo>
    <RTCPstats>PS=31367, OS=6273400, ...>
</flowinfo>
<flowinfo>
    <RTCPstats>PS=31366, OS=6273200, ...>
</flowinfo>

The actual sed command "sed -r -e '0,/ ?([A-Z_]*)=([0-9]*)/s//<ORIG_\1>\2<\/ORIG_\1>/g'" will do this, but the same entry as a SEDCMD will not. Altering a line of input is easy; altering only the FIRST instance in a record with embedded newlines is not. What are my options?

SEDCMD in props.conf?
Strip out all newlines so the SEDCMD treats it all as one line? (can't really do it with sed...)
Regex in transforms.conf?
Pass input through a script that CAN do this?

Some details: The linebreaker is two carriage returns (literally \r\r, or 0d0d). There are embedded newlines in each record, so any SEDCMD will be applied to each line, not the entire record all at once. I get 60,000 records per minute: this transformation needs to be fast.

Sample record (spaced out neatly):

<e>
<st>Perimeta XML CDR</st>
<h>the perimeta hostname</h>
<t>1664838107186</t>
<tid>2814754955435820</tid>
<sid>2082</sid>
<eid>CDR</eid>
<call starttime="1664837476918" starttime_local="2022-10-03T15:51:16-0700" endtime="1664838107179" endtime_local="2022-10-03T16:01:47-0700" duration="630261" release_side="term" bcid="a big string of numbers and letters">
<party type="orig" phone="caller phonenumber" domain="ipaddr1" sig_address="ipaddr1" sig_port="5060" sig_transport="udp" trunk_group="trunkgroupname" trunk_context="nap" sip_call_id="anumber@adestination"/>
<party type="term" phone="destination phonenumber" domain="0.0.0.0" routing_number="a routing number" sig_address="<an ip addr>" sig_port="5060" sig_transport="udp" trunk_group="6444" trunk_context="itg" edit_trunk_group="" edit_trunk_context="" sip_call_id="adifferentnumber@adifferentdestination"/>
<adjacency type="orig" name="orig_adjacency_system" account="" vpnid="0X00000001" mediarealm="CoreMedia1"/>
<adjacency type="term" name="dest_adjacency_system" account="" vpnid="0X00000004" mediarealm="CoreMedia1"/>
<category name="cat.sbc.redirected"/>
<connect time="1664837483144" time_local="2022-10-03T15:51:23-0700"/>
<firstendrequest time="1664838107158" time_local="2022-10-03T16:01:47-0700"/>
<disconnect time="1664838107179" time_local="2022-10-03T16:01:47-0700" reason="0"/>
<redirector bcid="another string of letters and numbers" editphone="a phone number"/>
<post_dial_delay duration="2895"/>
<QoS stream_id="1" instance="0" reservetime="1664837476918" reservetime_local="2022-10-03T15:51:16-0700" committime="1664837483144" committime_local="2022-10-03T15:51:23-0700" releasetime="1664838107184" releasetime_local="2022-10-03T16:01:47-0700">
<gate>
<flowinfo>
<local address="an ip address" port="63130"/>
<remote address="another ip address" port="36214"/>
<sd>m=audio 0 RTP/AVP 0
a=rtpmap:0 PCMU/8000
a=ptime:20
</sd>
<RTCPstats>PS=31367, OS=6273400, PR=31366, OR=6273200, PD=0, OD=0, PL=0, JI=0, TOS=0, TOR=0, LA=0, PC/RPS=31165, PC/ROS=4986400, PC/RPR=31367, PC/RPL=0, PC/RJI=0, PC/RLA=0, RF=91, MOS=43, PC/RAJ=0, PC/RML=0</RTCPstats>
</flowinfo>
<flowinfo>
<local address="an ip address" port="19648"/>
<remote address="a different ip address" port="26046"/>
<sd>m=audio 0 RTP/AVP 0
a=rtpmap:0 PCMU/8000
a=ptime:20
</sd>
<RTCPstats>PS=31366, OS=6273200, PR=31367, OR=6273400, PD=0, OD=0, PL=0, JI=0, TOS=0, TOR=0, LA=0, PC/RPS=0, PC/ROS=0, PC/RPR=0, PC/RPL=0, PC/RJI=0, PC/RLA=0, RF=82, MOS=41, PC/RAJ=0, PC/RML=0</RTCPstats>
</flowinfo>
</gate>
</QoS>
</call>
</e>
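One hedged avenue worth testing: eval's replace() honors a ^ anchor, so a lazily matched, start-anchored pattern can touch only the first occurrence in the whole record, embedded newlines included. An INGEST_EVAL sketch (Splunk 7.3+; the props stanza and transform name are hypothetical, and throughput at 60,000 records/minute would need to be verified):

# props.conf
[perimeta_cdr]
TRANSFORMS-tag_first_rtcp = tag_first_rtcpstats

# transforms.conf
[tag_first_rtcpstats]
INGEST_EVAL = _raw := replace(_raw, "(?s)^(.*?)<RTCPstats>", "\1<orig_RTCPstats>")

Because ^ can only match once, only the first <RTCPstats> is renamed; an ordinary SEDCMD can then key on <orig_RTCPstats> for the orig_ fields and on the remaining <RTCPstats> for the term_ side.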
Hello, while sending alerts to xMatters we are encountering the errors below:

ERROR [xmatters.alert_action.main] [xmatters] [<module>] [62654] <urlopen error [Errno 97] Address family not supported by protocol>

Traceback (most recent call last):
  File "/splunk/lib/python3.7/urllib/request.py", line 1350, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/splunk/lib/python3.7/http/client.py", line 1281, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1327, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1276, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/splunk/lib/python3.7/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/splunk/lib/python3.7/http/client.py", line 976, in send
    self.connect()
  File "/splunk/lib/python3.7/http/client.py", line 1443, in connect
    super().connect()
  File "/splunk/lib/python3.7/http/client.py", line 948, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/splunk/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/splunk/lib/python3.7/socket.py", line 711, in create_connection
    sock = socket(af, socktype, proto)
  File "/splunk/lib/python3.7/socket.py", line 151, in __init__
    _socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 147, in <module>
    REQUEST_ID = XM_ALERT.execute()
  File "/splunk/etc/apps/xmatters_alert_action/bin/xmatters.py", line 119, in execute
    request_id = xm_client.send_event(self.endpoint_url, xm_event)
  File "/splunk/etc/apps/xmatters_alert_action/lib/xmatters_sdk/xm_client.py", line 62, in send_event
    force_https=True
  File "/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 159, in post
    return self._send_request(req, headers)
  File "/splunk/etc/apps/xmatters_alert_action/lib/common_utils/rest.py", line 99, in _send_request
    res = urlopen(req)
  File "/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/splunk/lib/python3.7/urllib/request.py", line 543, in _open
    '_open', req)
  File "/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/splunk/lib/python3.7/urllib/request.py", line 1393, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/splunk/lib/python3.7/urllib/request.py", line 1352, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 97] Address family not supported by protocol>

Any clue where I might be wrong?
Hi, our system logs test runs as single events. In some cases we would have a re-run of a test. Both events are logically related but are separate for each run (the original run and the re-run). I wish to extract data from both events and present it together; I have tried several approaches but none has worked so far.

Step 1: identify the re-run event and get a unique identifier for the original run using some textual parsing on the workarea path:

index=my_index aa_data_source="my_info" is_rerun=True
| eval orig_workarea=workarea_path
| rex field=orig_workarea mode=sed "s@/rerun?$@@"

Step 2: now I would like to find and match the original run event for each of the results. I tried map:

| map search="search index=my_index aa_data_source=my_info workarea_path=$orig_workarea$" maxsearches=100000

This is probably wrong because it is resource-expensive, and after I found the original event per result I could only use the data of the original event (the result of map); I didn't find how to combine it with the re-run event data I searched on. I also tried subsearches in various ways; the main problem is that the subsearch cannot use the orig_workarea I extract from the primary search, because it runs first.

Step 3 would be to present the results from both events together, meaning: take field_from_eventA and field_from_eventB and place them in the same row (note that renaming might be required, since both events have the same fields).

I'm kind of at a dead end here and could use ideas on how to implement this search. Any ideas are welcome.

Thanks, noam
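A hedged sketch that avoids map and subsearches entirely: search both the originals and the re-runs in one pass, normalize the workarea path into a shared key, and pivot each pair onto one row with stats (field_from_eventA stands in for whatever fields you need to compare):

index=my_index aa_data_source="my_info"
| eval run_key=replace(workarea_path, "/rerun$", "")
| stats values(eval(if(is_rerun=="True", field_from_eventA, null()))) as rerun_value
        values(eval(if(is_rerun!="True", field_from_eventA, null()))) as orig_value
        by run_key
| where isnotnull(rerun_value) AND isnotnull(orig_value)

The final where keeps only runs that actually have a matched pair. Since both events collapse into a single row keyed by run_key, the renaming problem also disappears: the if() inside each aggregation does the disambiguation.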