Hi All, I don't have much experience with Splunk. My JSON payload looks like the example below; the msg.details array can contain any number of key/value pairs, in any order.

{
  "appName": "TestApp",
  "eventType": "Response",
  "msg": {
    "transId": "Trans1234",
    "status": "Success",
    "client": "clientXyz",
    "responseTime": 1650,
    "details": [
      { "keyName": "rtt", "keyValue": 2778 },
      { "keyName": "trace", "keyValue": 97007839130680 }
    ],
    "url": "/v1/test"
  }
}

I am trying to write a query that produces a table like the one below; I am interested in displaying only the keyValue of keyName:trace. Any help is appreciated. Thanks.

index=* appName="TestApp" msg.url="/v1/test"
| table msg.transId, msg.status, msg.details[keyName="trace"].keyValue

msg.transId   msg.status   msg.details[keyName="trace"].keyValue
Trans1234     Success      97007839130680
Trans7890     ERROR        29411645500355
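One common way to get at a single element of a JSON array like this is to expand msg.details with spath and mvexpand, then filter on keyName. A hedged sketch (untested; assumes the events are indexed as raw JSON):

```spl
index=* appName="TestApp" msg.url="/v1/test"
| spath path=msg.details{} output=details      /* one mv entry per {keyName,keyValue} object */
| mvexpand details                             /* one row per details entry */
| spath input=details                          /* extracts keyName and keyValue fields */
| where keyName="trace"
| rename keyValue AS trace
| table msg.transId, msg.status, trace
```

The bracketed path syntax in the original attempt (msg.details[keyName="trace"].keyValue) is not something SPL's table command understands, which is why the expand-then-filter pattern is usually needed.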
Hi Splunkers, I'm trying to get the Splunk web interface up, with no luck. When I restart Splunk, it gets stuck on the last step:

Waiting for web server at https://127.0.0.1:8000 to be available...................................

I then noticed that when I run the following command:

netstat -an | grep 8000

the output is:

tcp 13 0 0.0.0.0:8000 0.0.0.0:* LISTEN
tcp 518 0 10.242.13.20:8000 10.201.184.6:63670 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63648 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63717 CLOSE_WAIT
tcp 155 0 127.0.0.1:8000 127.0.0.1:51036 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63743 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63742 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:64055 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63669 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63649 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63730 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63718 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:63731 CLOSE_WAIT
tcp 518 0 10.242.13.20:8000 10.201.184.6:64056 CLOSE_WAIT

After some troubleshooting I tried setting "connection_host" to "none" in inputs.conf, with no luck. Then I tried increasing the default web thread-pool values in web.conf, also with no luck. I even tried restarting the server itself, with no luck.

Any thoughts that could help me?
Hi, I am struggling with the following task. I have a lookup file containing all the configured DHCP scopes, in the following format:

ScopeId  SubnetMask  Name  State  StartRange  EndRange  LeaseDuration

In dhcp.log I have the IP address for a client, and I need the ScopeId and the LeaseDuration for each client. My idea is to check whether the given IP address falls between StartRange and EndRange and then pick up the ScopeId and LeaseDuration. My problem is that I don't have a clue how to do so. Any ideas?

Thanks, Alex
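Plain lookup matching cannot do range comparisons, but one workaround is to bring the whole lookup alongside each event and compare the addresses numerically. A hedged sketch (untested; lookup file name dhcp_scopes.csv and client field name ip are assumptions, and the constant-key join is a cartesian product, so it only scales to a modest number of scopes):

```spl
index=dhcp sourcetype=dhcp
| eval key=1
| join max=0 key [| inputlookup dhcp_scopes.csv | eval key=1]
/* convert dotted quads to integers so ranges can be compared */
| eval o=split(ip, "."), s=split(StartRange, "."), e=split(EndRange, ".")
| eval ipn    = tonumber(mvindex(o,0))*16777216 + tonumber(mvindex(o,1))*65536 + tonumber(mvindex(o,2))*256 + tonumber(mvindex(o,3))
| eval startn = tonumber(mvindex(s,0))*16777216 + tonumber(mvindex(s,1))*65536 + tonumber(mvindex(s,2))*256 + tonumber(mvindex(s,3))
| eval endn   = tonumber(mvindex(e,0))*16777216 + tonumber(mvindex(e,1))*65536 + tonumber(mvindex(e,2))*256 + tonumber(mvindex(e,3))
| where ipn >= startn AND ipn <= endn
| table ip, ScopeId, LeaseDuration
```

If the scopes can be expressed as CIDR blocks instead of start/end ranges, a CIDR-matched lookup (match_type = CIDR in transforms.conf) is usually the cleaner option.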
Hi team, in my environment, let's say I have 1000 forwarders. To set the deployment poll, I cannot go to each and every forwarder and run ./splunk set deploy-server DS IP:8089. How do we set it in this scenario? Please clarify whether there is any automated way, such as a script. Thanks in advance.
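One way to avoid touching each forwarder by hand is to ship the deployment-server address as a deploymentclient.conf file inside the universal forwarder install package (or push it with whatever software-distribution tool installs the UF). A sketch with a placeholder host:

```ini
# deploymentclient.conf -- e.g. in $SPLUNK_HOME/etc/system/local/
# (the hostname below is a placeholder)
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089
```

Since this is just a flat file, it can be baked into a golden image or dropped in place by the same automation that installs the forwarder, and every new UF then phones home on first start.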
Hello, I am looking to narrow down my search: I only want to find events that happen outside of a specific time range, namely outside of 0800 to 1700. Any help would be appreciated. Kind regards
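One common pattern is to derive the hour of day from _time and keep only events outside business hours; a hedged sketch (index name is a placeholder):

```spl
index=your_index
| eval hour=tonumber(strftime(_time, "%H"))
| where hour < 8 OR hour >= 17
```

The automatically extracted date_hour field can serve the same purpose, but note it reflects the timestamp in the raw event rather than the search-time timezone, so the strftime approach is usually safer.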
Hi Community, I am trying to generate a timechart by month with the following query:

index=xyz Question="zzz" NOT "Could not get results" NOT "No Deployment Results Found" NOT "No Matching Deployments Found" NOT "Unable to load PatchLib"
| sort Computer_Name, patchmeantime
| stats max(patchmeantime) as MaxAge by Computer_Name
| stats avg(MaxAge) as MTTP
| timechart span=1mon avg(MTTP)

But nothing shows up, so I am pretty sure I am missing something critical (or super simple) here, but I'm not sure what it is. Any help would be really appreciated.
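A likely cause of the empty chart: the stats commands discard _time, so the final timechart has nothing to bucket by. One possible restructuring (untested against this data) is to bin by month up front and carry _time through both aggregations, making the trailing timechart unnecessary:

```spl
index=xyz Question="zzz" NOT "Could not get results" NOT "No Deployment Results Found" NOT "No Matching Deployments Found" NOT "Unable to load PatchLib"
| bin _time span=1mon
| stats max(patchmeantime) AS MaxAge BY _time, Computer_Name
| stats avg(MaxAge) AS MTTP BY _time
```

The sort before stats is also unnecessary, since max() and avg() are order-independent.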
So I have a field named "domain" whose values are single domains (A, B, C) and combinations of domains using two different separators:

A
B
C
A/B
A/C
A, B
C, D

I can successfully split the values by either "," or "/" with eval new_field1=split(domain,","), but if I follow it with another eval new_field1=split(domain,"/"), or with eval new_field2=split(new_field1,"/") afterwards, it doesn't work. Is there a way to split by both "," and "/"?
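One way around split() taking only a single delimiter is to normalize one separator into the other before splitting; a hedged sketch (mvmap requires Splunk 8.0+, and the trim handles the space after the commas):

```spl
| eval new_field=split(replace(domain, "/", ","), ",")
| eval new_field=mvmap(new_field, trim(new_field))
```

The second split attempt in the original fails because after the first split the field is already multivalue, and split() operates on single string values.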
Hi All, I'm struggling to remove everything before the date using SED.

Example:

| makeresults | eval _raw="Feb 2 14:27:50 test.dh.test.com named[123456]: [ID xxxxx local6.info] 02-Feb-2021 14:27:50.448 queries: info: client @172ac618 x.x.x.x#61721 (cd1ae518178748adbca4cff52b24b791.fp.measure.office.com): query: cd1ae51817ffdfdfhhh8748adbca4cff52b24b791.fp.measure.office.com IN A + (x.x.x.x)"

1- I want to remove everything up to 02-Feb-2021 (the date should be included). I tried this, with no luck:

| rex mode=sed field=_raw "s/[^[0-9]{2}-[\w]{3}-[0-9]{4}]*//g"

The desired result would be something like this:

02-Feb-2021 14:27:50.448 queries: info: client @172ac618 x.x.x.x#61721 (cd1ae518178748adbca4cff52b24b791.fp.measure.office.com): query: cd1ae51817ffdfdfhhh8748adbca4cff52b24b791.fp.measure.office.com IN A + (x.x.x.x)

2- I want to keep only the last 3 parts of the domain. Example: cd1ae51817ffdfdfhhh8748adbca4cff52b24b791.fp.measure.office.com; desired result: fp.measure.office.com

I have a working rex that does that, but how can I use SED so that anything before the last 3 parts of the domain is never indexed?

| rex field=_raw "query:\s(?:[^\s.]+\.)*(?P<query_dns1>[^\s.]+\.[^\s.]+\.[^\s.]+)"

THANK YOU!!
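The original attempt wraps the date pattern in square brackets, which turns it into a character class rather than a sequence. Since the goal is to change the data before it is indexed, one possible approach is SEDCMD in props.conf on the parsing tier (indexer or heavy forwarder); a hedged sketch with a placeholder sourcetype and untested expressions:

```ini
# props.conf on the parsing tier (sourcetype name is a placeholder)
[your:dns:sourcetype]
# 1) drop everything before the first dd-Mon-yyyy timestamp, keeping the date
SEDCMD-trim_prefix = s/^.*?(\d{2}-\w{3}-\d{4})/\1/
# 2) keep only the last three labels of each dotted domain
SEDCMD-trim_domain = s/([^\s.]+\.)+([^\s.]+\.[^\s.]+\.[^\s.]+)/\2/g
```

The same expressions can first be tried at search time (e.g. | rex mode=sed field=_raw "s/^.*?(\d{2}-\w{3}-\d{4})/\1/") before committing them to props.conf.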
Hi, I want to make a search out of events
I am trying to get network outage totals by domain. I have four domains: A, B, C, D. The problem is that sometimes an outage affects 2-3 domains and is reported as a single event listing several domains, rather than one event per domain.

| chart count(event_id) as Count by Domain

A        4
B        7
C        2
D        5
A, B     2
A, D     4
C, B     3
A, D, B  6

I want to count an outage toward every domain it affects. So anything that includes A (A; A, B; A, D; A, D, B) is added, and the count for A becomes 16; same for the other domains. The end result should be:

A  16
B  18
C  5
D  15

I've tried

eval domain=if(domain="A" OR domain="A, B" OR domain="A, D" OR domain="A, D, B", "A", domain)

but that only works for the first one: once the combined values have been rewritten to "A", they are no longer included in the totals for the subsequent if statements.
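Rather than rewriting the combined values one domain at a time, one approach is to turn each comma-separated Domain value into a multivalue field and expand it, so a multi-domain outage contributes one row per affected domain; a hedged sketch:

```spl
| makemv delim="," Domain
| mvexpand Domain
| eval Domain=trim(Domain)   /* strip the space left after each comma */
| chart count(event_id) AS Count BY Domain
```

With the sample numbers above, "A, D, B  6" would then add its events to A, D, and B alike, giving the desired totals of 16/18/5/15.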
Hi, I have this table of data:

Name       Age  Address
Mark       21   1 st xxxxx
Elisabeth  21   2 st xxxxx
Jane       22   3 st xxxxx
Bryan      24   4 st xxxxx

I want to list only the rows having a specific age, e.g. the list of people with Age=21:

Name       Age  Address
Mark       21   1 st xxxxx
Elisabeth  21   2 st xxxxx

Thanks for your help.
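If Age is extracted as a field, a simple where filter after building the table is enough; a minimal sketch (assuming Age is numeric):

```spl
| where Age=21
| table Name, Age, Address
```

An equivalent form is | search Age=21, which also works when the value is a string.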
Hi, I have this data:

URI  TXT                 Param
A    My text and others  param 1
A    My text and others  param 2
A    My text             param 3
A    My text             param 4
B    My text and others  param 2
B    My text             param 3
C    My text and others  param 1
C    My text             param4

I'd like to filter the data according to these rules: if URI = A, keep only rows whose TXT does not contain "others"; otherwise keep everything. The desired result:

URI  TXT                 Param
A    My text             param 3
A    My text             param 4
B    My text and others  param 2
B    My text             param 3
C    My text and others  param 1
C    My text             param4

I don't know how to build my request. I tried to use the "case" command like this, but it returns a logic error (Error in 'EvalCommand': The arguments to the 'like' function are invalid.):

... | eval TXT=case(URI="A", NOT LIKE("%others%"), 1=1, "others") ...

How do I use a negative match in a case command? Thanks
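Since the rule decides which rows to keep rather than what value to assign, case() may not be needed at all; a where clause can express "drop A rows containing 'others'" directly. A hedged sketch:

```spl
| where NOT (URI="A" AND like(TXT, "%others%"))
| table URI, TXT, Param
```

The original error comes from calling LIKE with a single argument; like() takes two (the field and the pattern), and negation goes outside the function, as above.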
Hi Splunk Community, I am trying to overwrite fields using an if clause. The data I have looks like the table below:

Sourcetype   Index   LastSeenDate
clarity-A    abc123  2-6-2023
clarity-B    abc123  1-15-2023
clarity-C    abc123  12-1-2022
DR:101:405   abc123  2-4-2023
BillingTool  abc123  2-2-2023

I want to overwrite the current LastSeenDate only for clarity-B and clarity-C, so that their last seen date equals the LastSeenDate of clarity-A. The table below is an example of what I am trying to achieve:

Sourcetype   Index   LastSeenDate
clarity-A    abc123  2-6-2023
clarity-B    abc123  2-6-2023
clarity-C    abc123  2-6-2023
DR:101:405   abc123  2-4-2023
BillingTool  abc123  2-2-2023
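One possible pattern is eventstats: copy clarity-A's date onto every row sharing the same Index, then overwrite only the B and C rows. A hedged sketch (assumes a single clarity-A row per Index, so max() simply picks up its value; note that max() on a string date like "2-6-2023" compares lexically, not chronologically, so with multiple clarity-A rows the dates should be parsed with strptime first):

```spl
| eventstats max(eval(if(Sourcetype="clarity-A", LastSeenDate, null()))) AS anchor_date BY Index
| eval LastSeenDate=if(Sourcetype IN ("clarity-B", "clarity-C"), anchor_date, LastSeenDate)
| fields - anchor_date
```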
I have logs with the following three fields:

- category
- price
- requestID (unique per entry)

I want to find all requestIDs for entries that have BOTH the same category and price within a 1-hour time span. I started off with this query:

index=foo component="shop-service"
| streamstats count as dupes by category, price
| search dupes > 1

But I cannot seem to restrict the duplicate count to a 1-hour window, nor tie the results back to the requestID.
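streamstats has a time_window option that bounds the running count to a sliding window, which may cover the 1-hour requirement; a hedged sketch (untested; time_window requires the events to be ordered by _time, which the default descending search order satisfies):

```spl
index=foo component="shop-service"
| streamstats time_window=1h count AS dupes BY category, price
| where dupes > 1
| stats values(requestID) AS requestIDs BY category, price
```

The final stats gathers the matching requestIDs per (category, price) pair; note the first event of each duplicate run will itself have dupes=1 and so is excluded here.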
Hi team, I am using the Splunk search below in a dashboard query:

index=BigIt log_severity=INFO OR WARN app_name=test-cap-generator country_code=USA error_code=COA-1004 earliest=-5d
| rex " total number where indicator I is Z(?<Counts>\d)"
| stats count by _time, Counts
| table _time, Counts

Requirement: I have to keep only the runs that occur once daily and ignore the duplicate runs that show up in the query results. Any suggestion on how I can ignore those duplicate runs? Anything that ran after 07:00 AM should be ignored for that particular day. For example, in the screenshot below, for date 2023-02-02 I have to keep only the 06:28 run.
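One possible way to keep a single run per day (the latest one before 07:00, such as the 06:28 run in the example) is to filter on the hour and then dedup on the calendar date; an untested sketch to append before the stats step:

```spl
| where tonumber(strftime(_time, "%H")) < 7        /* drop anything at or after 07:00 */
| eval day=strftime(_time, "%Y-%m-%d")
| dedup day                                        /* keep the first (most recent) event per day */
```

This relies on the default descending time order, so dedup retains the most recent qualifying run for each day.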
The issue happens after the Windows server is restarted; restarting the Splunk universal forwarder fixes it. Either the component that raises this event is not installed
Hello team! After unsuccessful research on the Internet and in the Splunk docs, I am turning to you with my question.

Let's say I have 50 alerts in a single app, all stored in the file $SPLUNK_HOME$/etc/apps/<appname>/default/savedsearches.conf. For version control / code management, I want to split this single savedsearches.conf into multiple savedsearches.conf files, so that developers can work with a folder layout like this:

default/
    alerts/
        category_1_alerts/
            savedsearches.conf
        category_2_alerts/
            savedsearches.conf
        ...

I tried without success on my Splunk instance. I don't know if it is possible, and if it is, I don't know whether some statement has to be made in the files (e.g. #include <filename>). Have a nice day.

PS: In my version control / code management tool, I can always resort to concatenating all my files together when packaging the Splunk code, if I don't manage to find a better answer.
Currently I am using the query below to extract rows where the Employee_Id column has other than 9 digits. However, I have another requirement on the same table: to extract employee IDs that are alphanumeric (like N0001234, etc.) or contain special characters. So overall we need every Employee_Id that is less than 9 digits, more than 9 digits, alphanumeric, or contains special characters.

index=QQQQQ sourcetype="XXXXX*" source=TTTTTT Extension="*" MSID="*" Employee_Active="*" Employee_Id=* last_name="*" first_name="*"
| rename Extension as DN
| dedup Employee_Id
| eval emplength=len(Employee_Id)
| stats count by DN, MSID, Employee_Active, emplength, Employee_Id, last_name, first_name
| where emplength>9
| table DN, MSID, Employee_Active, emplength, Employee_Id, last_name, first_name
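Since "too short, too long, alphanumeric, or special characters" all reduce to "not exactly nine digits", a single regex test may cover every case at once; a hedged sketch:

```spl
index=QQQQQ sourcetype="XXXXX*" source=TTTTTT Employee_Id=*
| dedup Employee_Id
| where NOT match(Employee_Id, "^\d{9}$")
| table DN, MSID, Employee_Active, Employee_Id, last_name, first_name
```

match() anchored with ^ and $ accepts only strings of exactly nine digits, so the NOT keeps everything else, including IDs like N0001234.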
I have a query that returns two values ("A" and "B"), and I want to make the field display dynamic: when "A" is bigger than "B", show "A" in green; when "A" is lower than "B", show it in red. I have managed to create dashboards like that, but only with fixed threshold values; in my case "B" is dynamic and comes from the same query as "A" (using two queries would be fine with me too). How can I do that?

{
    "type": "splunk.singlevalue",
    "options": {
        "majorColor": "> majorValue | rangeValue(majorColorEditorConfig)"
    },
    "dataSources": {
        "primary": "ds_N2TXpjLO"
    },
    "context": {
        "majorColorEditorConfig": [
            { "value": "#D41F1F", "to": "B" },
            { "value": "#118832", "from": "B" }
        ]
    },
    "showProgressBar": false,
    "showLastUpdated": false
}
When upgrading the Universal Forwarder using the .tgz on macOS, a pop-up appears stating the following:

The "DeRez" command requires the command line developer tools. Would you like to install the tools now?

If 'Cancel' is selected, it appears not to affect anything, but I am unsure why this is happening. It appears to happen while the configuration is being migrated during a Splunk UF version upgrade on macOS. What is the "DeRez" command, and what is not being migrated when this happens? Thanks!

-- Migration information is being logged to '/Applications/splunkforwarder/var/log/splunk/migration.log.2023-02-01.10-15-52' --
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
Migrating to:
VERSION=9.0.2
BUILD=17e00c557dc1
PRODUCT=splunk
PLATFORM=Darwin-universal
It seems that the Splunk default certificates are being used. If certificate validation is turned on using the default certificates (not recommended), this may result in loss of communication in mixed-version Splunk environments after upgrade.
"/Applications/splunkforwarder/etc/auth/ca.pem": already a renewed Splunk certificate: skipping renewal
"/Applications/splunkforwarder/etc/auth/cacert.pem": already a renewed Splunk certificate: skipping renewal
[DFS] Performing migration.
[DFS] Finished migration.
[Peer-apps] Performing migration.
[Peer-apps] Finished migration.
Init script installed at /Library/LaunchDaemons//com.splunk.plist.
Init script is configured to run at boot.
Splunk> Another one.
Checking prerequisites...
Management port has been set disabled; cli support for this configuration is currently incomplete.
Invalid key in stanza [webhook] in /Applications/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/Applications/splunkforwarder/splunkforwarder-9.0.2-17e00c557dc1-darwin-universal2-manifest'
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
Done