All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunkers! Please find a sample log attached; it contains a UserId field. Based on this log, I need Splunk search queries to build a dashboard with the following outputs:
1. The number of user logins on a daily, weekly, or monthly basis.
2. The trend of internal vs. external user logins.
3. The peak user login time of day.
4. The peak user login day of the week.
5. The average time users spend on the platform.
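A rough starting point for items 1 and 3, assuming the events live in a hypothetical index `my_index` and each event represents one login carrying the UserId field (adjust the index/sourcetype to your environment):

```spl
index=my_index UserId=*
| timechart span=1d dc(UserId) AS daily_logins
```

Changing `span=1d` to `1w` or `1mon` gives the weekly/monthly counts. For the peak hour of day:

```spl
index=my_index UserId=*
| eval hour=strftime(_time, "%H")
| stats count AS logins BY hour
| sort - logins
```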
I have two queries. The first one produces the fields source, IP, comment, and cIP, and its output is exported as a CSV lookup table. The second query also has source and IP, which are used for the comparison; it has its own comment field, which needs to be appended with the values from the first query, plus the additional fields nIP and logN. The second query should match source and IP against the lookup table, retrieve comment and cIP from it, and output the combined result.

Example. Query 1 output -> lookup table:

source   IP               Comment  cIP
xyz.com  111.111.111.111  table1   1
xyz.com  111.111.111.112  table1   2
abc.com  111.111.111.111  table1   1

Query 2 fields:

source   IP               Comment  nIP  logN
xyz.com  111.111.111.111  table2   2    951
xyz.com  111.111.111.112  table2   3    751
qwe.com  255.255.255.001  table2   2    152

Final output:

source   IP               Comment        cIP  nIP  logN
xyz.com  111.111.111.111  table1 table2  1    2    951
xyz.com  111.111.111.112  table1 table2  2    3    751
abc.com  111.111.111.111  table1         1    -    -

Query 2 contains more data than Query 1; I only want to include the source/IP combinations that appear in Query 1's output. For example, qwe.com appears in Query 2 but not in Query 1, so it is excluded from the final output, while abc.com is kept.

I am currently using append:

index=* AND source=*.log
| dedup source IP
| fields source IP Comment nIP logN
| append
    [ search index=app* earliest=-15m
    | eval host=IP
    | dedup source IPNAME
    | table SRVID IPNAME comment cIP ]
| stats values(comment) as comment, values(nIP) as nIP, values(logN) as logN by IPNAME, SRVID
| where comment=="table1"
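Since the Query 1 results already exist as a CSV lookup, one hedged alternative to append is a lookup joined on source and IP; the lookup file name `query1_results.csv` below is a placeholder for whatever your outputlookup wrote:

```spl
index=* source=*.log
| dedup source IP
| lookup query1_results.csv source IP OUTPUT Comment AS lookup_comment cIP
| where isnotnull(cIP)
| eval Comment=mvappend(lookup_comment, Comment)
| table source IP Comment cIP nIP logN
```

The `where isnotnull(cIP)` step keeps only the rows that matched the lookup, which is what drops qwe.com while keeping abc.com.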
Hi, I just installed this add-on (4.6.1) on one of our heavy forwarders (7.3.1.1) in QA. I cannot browse the application; it keeps loading forever on the Inputs and Configuration tabs. Searching the logs with

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" sourcetype=splunkd ERROR aws

displays:

07-09-2020 18:35:02.158 +0200 ERROR AdminManagerExternal - Unexpected error "<class 'splunk.AuthorizationFailed'>" from python handler: "[HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_aws/configs/conf-server/sslConfig". See splunkd.log for more details.
host = sto-splunk-qa-hf12.bde.local
source = /opt/splunk/var/log/splunk/splunkd.log
sourcetype = splunkd

07-09-2020 18:35:02.158 +0200 ERROR AdminManagerExternal - Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 88, in init_persistent
    hand = handler(mode, ctxInfo, data)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_sqs_inputs_rh.py", line 28, in __init__
    **kwargs
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/base_input_rh.py", line 44, in __init__
    self._service = LocalServiceManager(app=tac.splunk_ta_aws, session_key=self.getSessionKey()).get_local_service()
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/local_manager.py", line 14, in __init__
    enable_ssl = self._get_entity('configs/conf-server', 'sslConfig').get('enableSplunkdSSL')
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/local_manager.py", line 28, in _get_entity
    return entity.getEntity(path, name, sessionKey=self._session_key, namespace=self._app, owner=self._owner)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/entity.py", line 265, in getEntity
    serverResponse, serverContent = rest.simpleRequest(uri, getargs=kwargs, sessionKey=sessionKey, raiseAllErrors=True)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 536, in simpleRequest
    raise splunk.AuthorizationFailed(extendedMessages=uri)
AuthorizationFailed: [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_aws/configs/conf-server/sslConfig

Is this some kind of SSL issue? I just installed the app and did nothing else, and I cannot figure out why it does not work as expected.
I'm using the Cisco ESA add-on (https://splunkbase.splunk.com/app/1761/). The documentation references files which need to be monitored by adding monitor stanzas to inputs.conf, for example:

[monitor://<Cisco_Ironport_LOG_PATH>\mail.@20130712T172736.s]
sourcetype = cisco:esa:textmail

However, I do not have a single file with only Cisco ESA logs. I have a syslog server which receives logs from numerous devices (routers, switches, firewalls) and puts them into a single log file. This log file is then processed by a Splunk UF and sent to Splunk Cloud. How do I extract and parse the Cisco ESA logs from this file using the ESA add-on? I'm assuming that creating a monitor stanza on the shared log file will categorize everything as "cisco:esa:textmail".
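As a sketch (not the add-on's documented method), a sourcetype-overriding transform on the parsing tier could retag only the matching lines; the source path and the ESA-matching regex below are assumptions you would need to adapt to your data:

```ini
# props.conf (parsing tier: heavy forwarder or Splunk Cloud indexers,
# since a universal forwarder does not apply TRANSFORMS itself)
[source::/var/log/syslog/combined.log]
TRANSFORMS-esa = route_cisco_esa

# transforms.conf
[route_cisco_esa]
# hypothetical pattern for ESA textmail lines; adjust to your events
REGEX = \sMID\s\d+\s
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:esa:textmail
```

Lines that do not match keep the sourcetype assigned by the monitor stanza, so the rest of the syslog traffic is unaffected.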
Hello,

GOAL: determine whether an application server has logged, based on a list of application ID codes. I have 2 CSV lookups.

Applicationlist.csv contains: appID, appName. It is a subset of all applications listed in Applicationmetadata.csv.

appID  appName
5      application_five
24     application_twentyfour
35     application_thirtyfive
120    application_onehundtwnty

Applicationmetadata.csv contains: applicationID, applicationcode, appServerhostname, appServerIP.

applicationID  applicationcode  appServerhostname  appServerIP
1              app1             webapp101          1.2.3.101
1              app1             webapp1            1.2.3.1
2              app2             sql46              1.2.4.5
5              app5             sql234             1.2.5.67
5              app5             apach32            1.2.5.6
24             app24            webapp98           1.2.5.98
29             app29            sql678             1.4.5.6
35             app35            webapp35           1.7.8.99
35             app35            sql909             1.7.8.9
120            app120           rsatsl             1.8.9.0

* appID = applicationID: same data, different field name in each CSV.
* Each application usually has more than one server.

The index being referred to does not collect the app server logs themselves, but data about the servers' logging; for example, the original index each app server logged to.

Desired results (app metadata plus index(es)/sourcetype(s), sorted by appID):

appID  appNAME  appServerhostname  original_index  original_sourcetype
5      app_5    hostname_5         index1          sourcetype_a
24     app_24   hostname_24        index9          sourcetype_x
35     app_35   hostname_35        index11         sourcetype_z
120    app_120  hostname_120       index2          sourcetype_b

* original_index and original_sourcetype are fields in the index that are linked to appServerhostname.

Thanks in advance for your help.
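One hedged way to assemble this, using the lookup and field names from the post; the index name `server_logging_metadata` is a placeholder, and the subsearch assumes original_index/original_sourcetype are event fields keyed by appServerhostname:

```spl
| inputlookup Applicationmetadata.csv
| lookup Applicationlist.csv appID AS applicationID OUTPUT appName
| where isnotnull(appName)
| join type=left appServerhostname
    [ search index=server_logging_metadata
    | stats values(original_index) AS original_index,
            values(original_sourcetype) AS original_sourcetype
            BY appServerhostname ]
| table applicationID appName appServerhostname original_index original_sourcetype
| sort applicationID
```

The `where isnotnull(appName)` step restricts the metadata rows to the subset listed in Applicationlist.csv.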
Hello everyone, I am very new to Splunk. I have created an app in Splunk containing a dashboard which fetches an XML document whenever I enter an order number. The result I am getting is correct, but the output XML is not properly formatted: it all comes in a single line. Is there a way to present the result in a proper format?

For example, I want it like:

<orderStatusReply xmlns="blabla.blabla" transactionId="blabla" timestamp="2020-07-09T15:49:36.649Z">
  <result>
    <resultType>success</resultType>
  </result>
  <orderStatus type="history">
    <customer mailConfirmationRequired="false">
      <customerNumber>blabla blabla</customerNumber>
      <customerName>blabla</customerName>
      <printReceipt>blabla</printReceipt>
    </customer>
    <centerNumber>blabla</centerNumber>
    <containsItemPaymentAdjustment>false</containsItemPaymentAdjustment>
    <orderNumber>1111111111111111</orderNumber>
    <referenceNumber>11111111111111111111</referenceNumber>
    ...

but it comes like:

<orderStatusReply xmlns="blabla.blabla" transactionId="blabla" timestamp="2020-07-09T15:49:36.649Z"><result><resultType>success</resultType></result><orderStatus type="history"><customer mailConfirmationRequired="false"><customerNumber>blabla blabla</customerNumber><customerName>blabla</customerName><printReceipt>blabla</printReceipt></customer><centerNumber>blabla</centerNumber><containsItemPaymentAdjustment>false</containsItemPaymentAdjustment><orderNumber>1111111111111111</orderNumber><referenceNumber>11111111111111111111</referenceNumber>...
I am using version 1.1.0. The log file doesn't show any errors, but I can see that it issues a GET every 2 seconds, even though the polling interval is 60 seconds.

2020-07-09 10:37:15,846 DEBUG pid=14036 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (1): reports.office365.com
2020-07-09 10:37:17,849 DEBUG pid=14036 tid=MainThread file=connectionpool.py:_make_request:400 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=3999 HTTP/1.1" 200 51899
2020-07-09 10:37:17,882 DEBUG pid=14036 tid=MainThread file=base_modinput.py:log_debug:286 | Next URL is https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=5999
2020-07-09 10:37:17,883 DEBUG pid=14036 tid=MainThread file=base_modinput.py:log_debug:286 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=5999
2020-07-09 10:37:17,883 INFO pid=14036 tid=MainThread file=setup_util.py:log_info:114 | Proxy is not enabled!
2020-07-09 10:37:17,884 DEBUG pid=14036 tid=MainThread file=connectionpool.py:_new_conn:809 | Starting new HTTPS connection (1): reports.office365.com
2020-07-09 10:37:19,820 DEBUG pid=14036 tid=MainThread file=connectionpool.py:_make_request:400 | https://reports.office365.com:443 "GET /ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=5999 HTTP/1.1" 200 50410
2020-07-09 10:37:19,852 DEBUG pid=14036 tid=MainThread file=base_modinput.py:log_debug:286 | Next URL is https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=7999
2020-07-09 10:37:19,853 DEBUG pid=14036 tid=MainThread file=base_modinput.py:log_debug:286 | Endpoint URL: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2020-07-09T07%3A22%3A05.982209Z'%20and%20EndDate%20eq%20datetime'2020-07-09T07%3A52%3A05.982209Z'&$skiptoken=7999
Hello,

We had a power outage, after which our main Splunk instance (which serves as both search head and indexer) went offline. Our universal forwarders installed on 2 Windows domain controllers both showed huge memory usage afterwards, causing one of the two hosts to crash. I think the inability to forward events caused the queues to fill, but outputs.conf already reduces their size limit: "maxQueueSize = 100KB" (the default is 500KB).

My questions are:
1. What other steps can I take to prevent this? (I would rather lose logs than have a critical host go offline.)
2. Can I set a global memory usage limit for the forwarders (e.g. 2 GB)?
3. Could this be related to the forwarder version (7.2.3)?

I thank you in advance for your support. Regards.
Hi all,

I have to ingest a CSV file in which some fields are multivalue and span multiple lines, something like this:

FIELD1;FIELD2;FIELD3;FIELD4;FIELD5
xxxx;yyyyy;"ppp ";"qqq asd asd ert www";qwerty

How can I do it? I have tried many ways, but it fails every time. Ciao and thanks. Giuseppe
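A sketch of an indexed-extractions approach, using a hypothetical sourcetype name; the key points are the semicolon delimiter and the quote qualifier, which lets the structured-data parser keep newlines that occur inside a quoted field:

```ini
# props.conf (on the forwarder that monitors the file, since
# INDEXED_EXTRACTIONS is applied where the file is read)
[my_multiline_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
FIELD_QUALIFIER = "
HEADER_FIELD_LINE_NUMBER = 1
```

The monitor stanza in inputs.conf would then assign `sourcetype = my_multiline_csv` to the file.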
Hi,

We have a DynamoDB extension that reports ReadThrottleEvents, but so far we have not seen any WriteThrottleEvents or ThrottleRequests. To avoid questions and headaches later, we would like those metrics covered even though they are not currently being monitored. Is there a way to set up health rules using a regex for discovered metrics, or any other way to alarm when these metrics show up? Thanks
Hello,

I would like to set up statistics on the websites visited by users, specifically to find all users who visited online-shopping websites. However, I have to exclude all the links related to advertising. How can I exclude the advertising-related logs so that I can measure the real number of visits to these shopping websites? If this is difficult to measure, another idea could be to count only the users who logged in to their personal accounts on these shopping websites; how do I specify this? I hope my question is clear; I am very new to Splunk.

Example: sourcetype=MWGaccess3 urlc="Online Shopping" | top limit=15 user
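A hedged sketch building on the search from the post; the advertising category and domain patterns below are placeholders for whatever your proxy actually categorizes as advertising:

```spl
sourcetype=MWGaccess3 urlc="Online Shopping"
    NOT urlc="*Advertis*"
    NOT (url="*doubleclick*" OR url="*adservice*" OR url="*/ads/*")
| top limit=15 user
```

Excluding by the proxy's own URL category (if it has an advertising one) is usually more reliable than maintaining a domain blocklist by hand.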
Hi,

My issue is: I want to create a field from a fixed data string (always the same) which is not present in all logs. The objective is to create a table whose first column lists all values of the log-group field, and whose second column holds a binary value indicating whether the string is present in the log: 1 if present, 0 if not:

log-group  string presence
test1      1
test2      0
test3      1

For example, my logs:

log-group=test1 2020-07-09 13:28:38 [pool] INFO test : received from test analytics.measure.record topic: 0 objects random data string
log-group=test2 2020-07-09 13:28:38 [pool] INFO test : received from test analytics.measure.record topic: 0 objects
log-group=test3 2020-07-09 13:28:38 [pool] INFO test : received from test analytics.measure.record topic: 0 objects random data string

How can I do that?
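A sketch, assuming a hypothetical index name: searchmatch() tests each event for the literal string, and max() turns "any event in the group matched" into the per-group flag:

```spl
index=my_index log-group=*
| eval presence=if(searchmatch("random data string"), 1, 0)
| rename log-group AS log_group
| stats max(presence) AS "string presence" BY log_group
```

The rename avoids the hyphen in log-group being read as a minus sign in later eval expressions.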
I'm trying to apply a CIM data model to user-activity data, e.g. session activities, process activities, and network activities. Which CIM data model best fits this type of data? P.S. I find the Endpoint data model useful; is it the correct data model?
How would I get from the first output to the final output? [screenshots: First Output, Final Output]
Hi, I've deployed the Splunk forwarder on my machine and noticed it installs an older version of OpenSSL (1.0.2t). Is it possible to use a newer version of OpenSSL? EOL for 1.0.2 was in 2019. Assuming we are not using SSL, can I remove OpenSSL from the installation? Will the forwarder still work? Thanks!
Hi,

We have a Jenkins pipeline, and I am writing a query to visualize the duration of the various stages across different builds. When using the Jenkins Splunk plugin, it returns some values which aren't actually stage names. If I try to filter those out in the search itself, it returns empty data, because all stages sit in a single statistics raw event; using where on it afterwards doesn't work either, and sort doesn't work on the numeric duration values either. The query I am using is:

index=dx_campaign_utf_jenkins_statistics host="<hostname>" event_tag=job_event type=completed job_name=orchestration-pipeline "stages{}.name"="*"
| rename stages{}.duration as stageduration
| rename stages{}.name as stageName
| table stageName, stageduration, build_number

[screenshot of the query result] How do I remove stages like the /apps and com ones?
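Because the stage names and durations are multivalue fields on a single event, filtering has to happen after splitting them apart. A hedged sketch: mvzip pairs each name with its duration before mvexpand, and the exclusion regex for entries like /apps or com is an assumption to adapt:

```spl
index=dx_campaign_utf_jenkins_statistics host="<hostname>" event_tag=job_event type=completed job_name=orchestration-pipeline "stages{}.name"="*"
| eval pair=mvzip('stages{}.name', 'stages{}.duration', "|")
| mvexpand pair
| eval stageName=mvindex(split(pair, "|"), 0),
       stageduration=tonumber(mvindex(split(pair, "|"), 1))
| where NOT match(stageName, "^(/|com($|\.))")
| table stageName stageduration build_number
| sort - stageduration
```

The tonumber() conversion is also what makes sort behave numerically on the durations.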
Hi All,

I am looking for a cron expression that does NOT trigger the alert during particular periods of time, on a daily basis. The alert is scheduled to run every 10 minutes and should stay quiet during these windows:
1:00 AM to 1:15 AM
2:00 AM to 2:15 AM
Kindly help me.
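A single cron expression cannot express "every 10 minutes except those two windows", since cron has no per-hour exceptions. One hedged workaround is to keep the schedule at `*/10 * * * *` and suppress results inside the windows within the search itself:

```spl
... your alert search ...
| eval now_hm=tonumber(strftime(now(), "%H%M"))
| where NOT ((now_hm>=100 AND now_hm<115) OR (now_hm>=200 AND now_hm<215))
```

With no results during those windows, the alert condition never fires. An alternative is two cloned alerts whose cron schedules together cover every hour except the excluded minutes.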
Hi Team, Is it possible to define a new customised range for a single value visualization, other than the default ranges (low, guarded, elevated, etc.)? We want to customise the range options to include a new range in our single value. Thanks in advance.
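In a Simple XML dashboard, a single value's colour ranges are free-form rather than limited to the preset names; the thresholds and colours below are illustrative only:

```xml
<single>
  <search><query>...</query></search>
  <option name="rangeValues">[0,50,100]</option>
  <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
</single>
```

Each boundary in rangeValues adds one more band, and rangeColors needs one colour per band (one more entry than rangeValues).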
I have data like below:

User  Points gain
a     1004
b     900
c     850
d     700
e     600

I want to create a new column based on Points gain: if a user got > 1000 then Expert, 850 to 1000 then Master, 700 to 850 then Average, < 700 then Slow. Thanks
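A sketch using eval case(); the single quotes are required because the field name contains a space, and the boundary handling (which bucket exactly 850 or 700 falls into) is an assumption to adjust:

```spl
... | eval level=case('Points gain' > 1000, "Expert",
                      'Points gain' >= 850, "Master",
                      'Points gain' >= 700, "Average",
                      true(), "Slow")
| table User "Points gain" level
```

case() evaluates its conditions in order, so each row lands in the first bracket it satisfies.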
Hi Team,

I was trying to set up an alert in AppDynamics for WebLogic managed server nodes. The requirement is that an alert must be sent if any of the WebLogic managed server nodes is in a warning, failed, or shutdown state. However, I could not find any option to set this. Can you please confirm whether any such option exists in AppDynamics? Regards, Mohini Upasani