All Topics



Hi there, I've set up a dashboard with various columns; one of them outputs a number field which has a comma (,) in it. I can remove the comma using the following command, where SurveyAnswers is the field name:

rex field=SurveyAnswers mode=sed "s/\,//g"

This works fine in a separate search; however, the same command doesn't work when I try to add it to the search in my dashboard and save. Any ideas? Thanks
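For reference, a minimal standalone sketch of the comma-stripping step described above; the index and sourcetype names here are hypothetical placeholders, not taken from the post:

index=survey_data sourcetype=survey_results
| rex field=SurveyAnswers mode=sed "s/,//g"
| table SurveyAnswers

In a Simple XML dashboard the same pipeline would sit inside the panel's <query> element, so any difference in behaviour between the standalone search and the dashboard would come from how the query is embedded rather than from the rex itself.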
Hello, I have a .NET 5 Web API deployed in a Linux Docker container, and I want to instrument the application with AppDynamics. The application itself works; I test it every time with Swagger. I've tried 2 implementations:

1. Install the .NET Core Microservices Agent for Windows -> https://docs.appdynamics.com/4.5.x/en/application-monitoring/install-app-server-agents/net-agent/net-microservices-agent/install-the-net-core-microservices-agent-for-windows
2. .NET Core for Linux SDK -> https://docs.appdynamics.com/21.5/en/application-monitoring/install-app-server-agents/net-agent/net-core-for-linux-sdk

The first scenario didn't work for me: I got a consistent error, a single log line saying "use clr profiler". The second does nothing at all; I have no feedback whatsoever and no log is created.

This is my Docker configuration:

"Docker": {
  "commandName": "Docker",
  "launchBrowser": true,
  "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}/Swagger",
  "environmentVariables": {
    "CORECLR_PROFILER": "{57e1aa68-2229-41aa-9931-a6e93bbc64d8}",
    "CORECLR_ENABLE_PROFILING": "1",
    "CORECLR_PROFILER_PATH": "/app/bin/Debug/net5.0/runtimes/linux-64/native/libappdprofiler.so",
    "APPDYNAMICS_LOG_PATH": "/app/bin/Debug/net5.0"
  },
  "publishAllPorts": true,
  "useSSL": true
}

And this is the AppDynamics configuration:

{
  "feature": [ "FULL_AGENT" ],
  "controller": {
    "host": "MYHOST.saas.appdynamics.com",
    "port": 443,
    "account": "myAccount",
    "password": "mypassword",
    "ssl": true,
    "enable_tls12": true
  },
  "application": {
    "name": "myapplicationname",
    "tier": "my-appliation-tier",
    "node": ""
  },
  "log": {
    "directory": "/app/bin/Debug/net5.0",
    "level": "ALL"
  }
}
Would love some guidance about what to look for in the logs.
I configured the Okta Identity Cloud for Splunk app to ingest Okta logs into Splunk but am getting the error message below:

2021-12-21 16:34:35,586 ERROR pid=1375 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 218, in get
    record = self._collection_data.query_by_id(key)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3648, in query_by_id
    return json.loads(self._get(UrlEncoded(str(id))).body.read().decode('utf-8'))
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/client.py", line 3618, in _get
    return self.service.get(self.path + url, owner=self.owner, app=self.app, sharing=self.sharing, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request
    raise HTTPError(response)
solnlib.packages.splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- KV Store is initializing. Please try again later.
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/okta_identity_cloud.py", line 68, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/input_module_okta_identity_cloud.py", line 774, in collect_events
    lastTs = helper.get_check_point((cp_prefix + ":" + opt_metric + ":lastRun"))
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 521, in get_check_point
    return self.ckpt.get(key)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/utils.py", line 159, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-Okta_Identity_Cloud_for_Splunk/bin/ta_okta_identity_cloud_for_splunk/aob_py3/solnlib/modular_input/checkpointer.py", line 222, in get
    'Get checkpoint failed: %s.', traceback.format_exc(e))
  File "/opt/splunk/lib/python3.7/traceback.py", line 167, in format_exc
    return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
  File "/opt/splunk/lib/python3.7/traceback.py", line 121, in format_exception
    type(value), value, tb, limit=limit).format(chain=chain))
  File "/opt/splunk/lib/python3.7/traceback.py", line 508, in __init__
    capture_locals=capture_locals)
  File "/opt/splunk/lib/python3.7/traceback.py", line 337, in extract
    if limit >= 0:
TypeError: '>=' not supported between instances of 'HTTPError' and 'int'
Hello,

My Splunk search queries an API and gets a JSON answer. Here is a sample for one host (the full JSON answer is very long, ≈ 400 hosts):

{
    "hosts": [
        {
            "hostInfo": {
                "displayName": "host1.fr"
            },
            "modules": [
                {
                    "moduleType": "JAVA",
                    "instances": [
                        {
                            "Instance Name": "Test1",
                            "moduleVersion": "1.0"
                        },
                        {
                            "Instance Name": "Test2",
                            "moduleVersion": "1.1"
                        },
                        {
                            "Instance Name": "Test3",
                            "moduleVersion": "1.2"
                        }
                    ]
                }
            ]
        }
    ]
}

First of all, I have to parse this JSON manually because Splunk automatically extracts only the first fields of the first host.

With the following search, I manually parse this JSON all the way down to the "instances{}" array and count the number of moduleVersion values:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{}.modules{}.instances{} output=host
| fields - _raw
| mvexpand host
| spath input=host
| stats count(moduleVersion)

It displays 1277 moduleVersion values, and that is the right number.

On the other hand, with the next, similar search, where I only expand the first array ("hosts{}"), I get a different number of moduleVersion values:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{} output=host
| fields - _raw
| mvexpand host
| spath input=host
| stats count(modules{}.instances{}.moduleVersion)

It displays 488 moduleVersion values, which is incorrect. Why is there a difference? Thank you. Best regards,
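For comparison, a sketch of one way to keep the per-host expansion and still count every instance: expand both nested arrays before counting. This is only an illustrative pipeline under the same index and source named in the question, not a confirmed fix:

index="supervision_software" source="API" earliest=-1m
| spath path=hosts{} output=host
| fields - _raw
| mvexpand host
| spath input=host path=modules{}.instances{} output=instance
| mvexpand instance
| spath input=instance
| stats count(moduleVersion)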
I'm trying to figure out how to show the uptime of a device as a percentage over 30 days, in a way that is agnostic to both Linux and Windows data.

For Linux I am currently using index=os sourcetype=Unix:Uptime as my data set; it's a default data set that ships with the Linux TA. For Windows I am using this search:

index=wineventlog LogName=System EventCode=6013
| rex field=Message "uptime is (?<uptime>\d+) seconds"
| eval Uptime_Minutes=uptime/60
| eval LastBoot=_time-uptime
| convert ctime(LastBoot)
| eval uptime=tostring(uptime, "duration")
| stats latest(_time) as time by host, Message, uptime, LastBoot

Currently, I can't figure out how to account for a reboot that occurs during the month. The Linux data doesn't have a 'LastBoot' field like the Windows data, and I'm not sure how to create one. The closest I've gotten is to use something like this for either Linux or Windows, simply renaming or creating the 'uptime' field in seconds:

index=nix sourcetype=Unix:Uptime
| rename SystemUpTime as uptime
| streamstats sum(uptime) as total by host
| eval tot_up=(total/157697280)*100
| eval host_uptime=floor(tot_up)
| stats max(host_uptime) as pctUp by host

This is obviously crude, and I'm trying to refine it, so I'm looking for any help. I'm obviously missing something, and I'm sure I'm not the first person to ask a question like this, though I couldn't find anything specific to it on Answers. I have a search that shows me total uptime as a duration for either Windows or Linux, and that's great! I'm just looking for the total uptime in percent over a 30-day span that accounts for reboots, or legitimate system hard-down incidents.
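As a rough arithmetic sketch only, not a tested solution: 30 days is 30*24*60*60 = 2,592,000 seconds, so a percentage for a single uptime reading per host could be derived as below, assuming the base search has already produced an uptime field in seconds for both OS types (the field name is an assumption):

<base search that yields uptime in seconds per host>
| stats max(uptime) as uptime_sec by host
| eval pctUp = round(min(uptime_sec, 2592000) / 2592000 * 100, 2)

This only reflects the latest boot cycle per host; accounting for multiple reboots would still require summing the distinct uptime intervals within the window.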
We've gotten a search to work that shows the delta between the number of messages in an inbox for a period of time:

<basesearch>
| bin _time span=5m
| stats max(Items) AS Max by _time
| delta Max as Delta
| fields _time Delta
| where Delta>=10

But I want to do this for multiple inboxes, and delta is merging the inboxes together, so the values of each inbox interfere with each other.

<basesearch multiple mailboxes>
| bin _time span=5m
| stats max(Items) AS Max by _time User
| delta Max as Delta
| fields _time Delta User

returns:

_time   User    Max   Delta
09:15   user1   103
09:15   user2   251   148
09:15   user3   17    -234

and I want the users to be treated as individual accounts, not merged with each other. I assume I need to use streamstats for this, but so far I've been unable to work out how.
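A minimal sketch of the streamstats approach hinted at above: compute the previous value per User, then derive the delta from it. Field names are taken from the question; the threshold value is only illustrative:

<basesearch multiple mailboxes>
| bin _time span=5m
| stats max(Items) as Max by _time User
| sort 0 User _time
| streamstats current=f window=1 last(Max) as PrevMax by User
| eval Delta = Max - PrevMax
| where Delta >= 10
| fields _time User Max Delta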
Learning about joins and subsearches. What is the following query executing, and would there be a way to make it more efficient?

index=old_index
| stats count values(d) as d by username
| join type=inner username
    [ search index=new_index | stats count by username ]

I believe it starts by searching and counting usernames in the new index; however, I am getting mixed up after that.
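For comparison, a sketch of a common pattern that avoids the join by letting a single stats pass group events from both indexes; this assumes username means the same thing in both indexes, and the index names are taken from the question:

index=old_index OR index=new_index
| stats count(eval(index="old_index")) as old_count count(eval(index="new_index")) as new_count values(d) as d by username
| where old_count > 0 AND new_count > 0

The where clause keeps only usernames that appear in both indexes, which mimics the inner join.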
Hi All, I have a dashboard where a click on one panel feeds tokens into the search of a second panel.

First Panel:

<title>Juniper Mnemonics</title>
<table>
  <search>
    <query>index=nw_syslog | search hostname="*DCN*" | stats count by cisco_mnemonic, hostname | sort - count</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
  </search>
  <option name="drilldown">row</option>
  <option name="refresh.display">progressbar</option>
  <drilldown>
    <condition field="cisco_mnemonic">
      <set token="message_token">$click.value$</set>
    </condition>
    <condition field="hostname">
      <set token="hostname_token">$click.value$</set>
    </condition>
    <condition field="count"></condition>
  </drilldown>
</table>

Two values from this panel are used in the second panel's search.

Second Panel:

index=nw_syslog
| search hostname="*DCN*"
| search cisco_mnemonic="$message_token$"
| search hostname="$hostname_token$"
| stats count by message
| sort - count

Issue: whenever I click a row in the first panel's table (drilldown is set to ROW), the tokens are not populated correctly: only the "cisco_mnemonic" value is used for both the cisco_mnemonic and hostname tokens. Please guide me on how I can set both tokens with a single click.
I have 2 sourcetypes, vpn and winevents. How do you write a single query to get the winevents for the top 5 busiest machines on IP X (one IP is used by many users)? The vpn sourcetype contains both hostname and IP, while winevents only contains the hostname. I'm assuming I'd use the append command and a subsearch:

sourcetype=winevents
| append [search sourcetype=vpn]
| top limit=5

Any help is appreciated, thanks
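For reference, a sketch of the other common pattern: using the vpn data in a subsearch as a filter rather than appending it. The field names src_ip and hostname are assumptions, not taken from the post, and would need to match the actual vpn and winevents fields:

sourcetype=winevents
    [ search sourcetype=vpn src_ip="X"
      | top limit=5 hostname
      | fields hostname ]
| stats count by hostname

Here the subsearch returns the top 5 hostnames seen for IP X, and those values implicitly filter the outer winevents search.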
We are getting the below error while executing the query:

com.microsoft.sqlserver.jdbc.SQLServerException: The value is not set for the parameter number 1.

Kindly advise on the resolution.
Hi, I need help evaluating the CSV files under the "<Splunk directory>\etc\apps\search\lookups" folder. We have multiple CSV files in this folder, and I need to check which CSV files are not in use, and which searches use the others, so that the unused CSV files can be deleted.
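A sketch of one place to start looking: querying saved searches over REST for references to a particular lookup file. The lookup file name below is a hypothetical placeholder, and this only covers saved searches, not lookups referenced elsewhere (e.g. in dashboards or props/transforms):

| rest /servicesNS/-/-/saved/searches
| search search="*my_lookup_file.csv*"
| table title eai:acl.app search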
Hello Team, we have been using the Splunk App for Jenkins in our environment, and within the app we have a separate dashboard for Job Insight. Within the Job Insight dashboard we have a "Latest Build" panel which displays the build value. The expectation is that when the value in the panel is clicked, it should display the build results, taking the clicked value as a variable. However, we see that the value is not passed when clicking on the panel.

Code in the respective page; the $click.value$ value is not passed:

<link>
<![CDATA[build?type=build&host=$host$&job=$jobName$&build=$click.value$]]>
</link>

In the dashboard URL:

splunk_app_jenkins/build?type=build&host=JENKINS-F1DEVOPS-PRDINTRANET-IE&job=F1_ALL_DEV_SML/f1-all-dev-sml/feature%2FGDS-641_lint_tools&build=$click.value$
Hello Guys, we have to integrate one of our SQL Server instances with Splunk; the current version is SQL Server 2014. We are using the Splunk DB Connect app to configure it. Kindly confirm: if the SQL team upgrades to SQL Server 2017, is that compatible with Splunk DB Connect, or do they need to upgrade to SQL Server 2019? Please provide any relevant solutions or documents on this.
Hi Community, I'm currently facing a concern: the Health Rule Violations API has been returning less information in the "description" field since my company updated the controller from version 4.5.16.2272 to 20.7.2-2909. Here is the API result comparison between both versions:

4.5.16.2272:
"description": "AppDynamics has detected a problem with Application <b>APP-1</b>.<br><b>Service Availability</b> continues to violate with <b>critical</b>.<br>All of the following conditions were found to be violating<br>For Application <b>APP-1</b>:<br>1) API-PORTAL<br><b>Availability's</b> value <b>2.00</b> was <b>less than</b> the threshold <b>3.00</b> for the last <b>30</b> minutes<br>"

20.7.2-2909:
"description": "AppDynamics has detected a problem.<br><b>Business Transaction Health</b> is violating."

I'm wondering if anything was misconfigured in the health rule, or whether there is a way to fine-tune the alert to show the detailed version. Thank you!
index name = my_index
source name = my_source
sourcetype = my_sourcetype
host = 192.168.0.10
-----------------------------
When the field action is "allow", I want the value to become my_allow; when action is "deny", it should become my_deny; any other value should become my_myontype. I want to change it to this. Help me.
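A minimal sketch of one way to express this mapping with eval and case(), assuming the existing field is named action and the result goes into a new field (action_mapped is a hypothetical name):

index=my_index source=my_source sourcetype=my_sourcetype host=192.168.0.10
| eval action_mapped = case(action=="allow", "my_allow",
                            action=="deny",  "my_deny",
                            true(),          "my_myontype")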
Is it possible to send alert logs from FireEye CM (Central Management) to the FireEye App for Splunk?
Hi, from the 3 events below I need to extract a field called "Event_Name" that captures "BeyondTrust_PBUL_ACCEPT_Event".

Desired output: Event_Name (field name) = BeyondTrust_PBUL_ACCEPT_Event (field value)

Example Event 1:
<86>Dec 22 ddppvc0729 pbmerd2.1.0-12: BeyondTrust_PBUL_ACCEPT_Event: Time_Zone='IST'; Request_Date='2021/1/27'; Request_Time='2:2:51'; Request_End_Date='2021/1/27'; Request_End_Time='22:1:51';Submit_User='spnt'; Submit_Host='wcpl.com';

Example Event 2:
<83>Dec 22 ddpc0729 pbmerd21.1.0-12: [2658] 5105.1 failed to get ACK packet during a CMD_SWAPTTY_ONE_LINE sequence - read failure in receive acknowledgement

Example Event 3:
<38>Dec 22 ddppvc0729 root[25132]: [ID 7011 auth.info] CEF:0|BeyondTrust|PowerBroker|1.1.0-12|7011|PBEvent=Accept|4|act=Accept end=Dec 1 2021 1:11:40 shost=dc8 dvchost=dc8 suser=t8adsfk duser=root filePath=/opt/ cs1Label=Ticket cs1=Not_Applicable deviceExternalId=0a2adfersds9 fname=./SSB_Refresh_Pbrun_Local_Policy_Files.sh

What I tried for the regex extraction:

Input: (?<Event_Name>\w{10}[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z]+_[a-zA-Z]+)

Output: it matches in two places across the three events above.
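For reference, a sketch of a more tightly anchored extraction that matches the literal event name followed by a colon, rather than a generic underscore pattern. This is only one possible approach; the index and sourcetype names are hypothetical placeholders:

index=beyondtrust sourcetype=pb:syslog
| rex field=_raw ":\s(?<Event_Name>BeyondTrust_PBUL_ACCEPT_Event):"
| table Event_Name

Against the three sample events above, this would populate Event_Name only for the first event, since only that event contains the literal token followed by a colon.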
Hi Everyone, I'm running Splunk Enterprise 8.2.2.1 on my Mac (macOS Big Sur), and it runs quite well, except that there is no search history available when using a user ID with the admin role. But from the CLI, in etc/users/bd/search/history there is actually a file called <hostname>.idx.csv which holds all my history.

1. Can anyone please explain what's going on here?

PS. I have 5 instances running on my Mac (a combined SH/IDX, a DPL, an HFWD, and 2 UFs), and it all works nicely together. The difference is that on the SH I log in with an internally created user (the one with no history above), but on the HFWD, for example, I use the user "splunk" (this user also runs all the instances at the OS level) to log in, and there the history works just fine.

2. There has got to be a missing link, but which?

Cheers, Bjarne
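For reference, a sketch of another way to look at what a given user has searched, via the audit index; this assumes the logged-in role is allowed to read _audit and is not a substitute for the search history UI:

index=_audit action=search info=granted user=*
| table _time user search
| sort - _time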