Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi community,

I have 2 different lists with fields as follows:

list A - ip_address, source, account_id
list B - ip_address, source, account_id, field4, field5

I want to compare both lists to accomplish list(B) - list(A), i.e. retain only the list(B) entries whose ip_address value does not appear in list(A), while also returning the values of field4 and field5.

Example:

list A
ip_address    source   account_id
10.0.0.1      A        1000
192.168.0.1   A        1001

list B
ip_address    source   account_id   field4   field5
10.0.0.2      B        999          xxx      yyyy
192.168.0.1   B        1001         xxy      yyyx

Result
ip_address    source   account_id   field4   field5
10.0.0.2      B        999          xxx      yyyy

I have tried the following:

index=seceng source="listB"
| eval source="B"
| fields ip_address source account_id field4 field5
| append
    [| inputlookup listA
     | eval source="A"
     | fields ip_address source account_id]
| stats values(source) as source, count by ip_address account_id field4 field5
| where count == 1 AND source == "B"

The issue with this query is that field4 and field5 exist only in list(B), so grouping by them drops the list(A) rows entirely and the stats returns every list(B) entry. It works when field4 and field5 are removed from the stats clause, but those are exactly the attributes I want included in the result. Can anyone suggest how the expected result can be accomplished? Really appreciate it, and thanks in advance!
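One possible approach (a sketch, assuming ip_address alone is the comparison key): group only by ip_address so rows from both lists still collide, and carry field4/field5 along with values() instead of putting them in the by clause:

index=seceng source="listB"
| eval source="B"
| fields ip_address source account_id field4 field5
| append
    [| inputlookup listA
     | eval source="A"
     | fields ip_address source account_id]
| stats values(source) as source, values(account_id) as account_id,
        values(field4) as field4, values(field5) as field5, count by ip_address
| where count == 1 AND source == "B"

An ip_address present in both lists produces a group with count > 1 (and source containing both A and B) and is filtered out, while a list(B)-only row keeps its field4 and field5 values in the output.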
Hello splunkers, I need your help to find a solution for the following issue. I have a log file as a source that I'm indexing as metrics.

Sample event:

2022/06/15 10:15:22 Total: 1G Used: 65332K Free: 960.2M

I'm able to index the values in a metric index, but I would like to convert everything to the same unit before doing so. I tried with eval but it doesn't work.

props.conf

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TRANSFORMS-extract_test = fields_extract_test
EVAL-Total = Total*100
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_metrics_test

transforms.conf

[fields_extract_test]
REGEX = .*Total: (.*?)([A-Z]) Used: (.*?)([A-Z]) Free: (.*?)([A-Z])
FORMAT = Total::$1 Total_Unit::$2 Used::$3 Used_Unit::$4 Free::$5 Free_Unit::$6
WRITE_META = true

[metric-schema:extract_metrics_test]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
METRIC-SCHEMA-WHITELIST-DIMS = Total,Total_Unit,Used,Used_Unit,Free,Free_Unit

How can I do this? Thanks in advance
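One thing worth noting: EVAL-Total in props.conf is a search-time setting, so it never changes what the index-time metric-schema transforms see. An index-time alternative is INGEST_EVAL in transforms.conf. A sketch that normalizes Total to KB (the stanza name convert_total_test and the KB target unit are assumptions; the same pattern would repeat for Used and Free):

transforms.conf

[convert_total_test]
INGEST_EVAL = Total_KB=Total*case(Total_Unit=="G", 1048576, Total_Unit=="M", 1024, Total_Unit=="K", 1)

props.conf

TRANSFORMS-extract_test = fields_extract_test, convert_total_test

Transforms listed on one TRANSFORMS- line run in order, so the conversion should see the fields the regex transform extracted; the numeric Total_KB would then be picked up by _ALLNUMS_ as a measure in place of the raw Total value.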
I have a panel which shows the usage of a dashboard in the GMT timezone. Is it possible to show the same data in different timezones (PST, EST, IST, etc.) as different lines in the same chart? Below is the query, which shows the count in GMT:

index="_internal" user!="-" sourcetype=splunkd_ui_access "GET" "sample"
| rex field=uri "\/app\/(?<App_Value>\w+)\/(?<dashboard>[^?\/]+)"
| search App_Value="sample" dashboard="daily_health"
| timechart count

How can we modify this query to show the different timezones in a single chart?
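One possible way (a sketch; the fixed UTC offsets are assumptions and ignore DST): bucket the counts, duplicate each bucket once per zone with a shifted _time, then pivot the zones into series:

index="_internal" user!="-" sourcetype=splunkd_ui_access "GET" "sample"
| rex field=uri "\/app\/(?<App_Value>\w+)\/(?<dashboard>[^?\/]+)"
| search App_Value="sample" dashboard="daily_health"
| bin _time span=1h
| stats count by _time
| eval zones=mvappend("GMT,0", "PST,-28800", "EST,-18000", "IST,19800")
| mvexpand zones
| eval zone=mvindex(split(zones, ","), 0)
| eval _time=_time+tonumber(mvindex(split(zones, ","), 1))
| xyseries _time zone count

Each line is the same series shifted by that zone's offset from UTC in seconds; DST-aware offsets would need per-event logic instead of constants.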
We have platform events in Salesforce that get published, so from Splunk we need to subscribe to those events. How can this be done in Splunk? Please suggest.
If a cloud application like ServiceNow or Salesforce is integrated with central authentication like Azure AD for authenticating users, how can I identify user authentication logs for these specific apps in the Azure AD logs? I am looking at logs using this query:

index=o365 sourcetype=o365:management:activity
| stats count by vendor_product

but most of these vendor products are Microsoft-based. I don't see any other cloud apps here. Would somebody be able to help me with this, please?
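A possible starting point (a sketch; the field and value names depend on the add-on version, so treat them as assumptions to verify against your data): Azure AD sign-ins surface in o365:management:activity under the AzureActiveDirectory workload, and each sign-in record carries an identifier for the application that was signed into:

index=o365 sourcetype=o365:management:activity Workload=AzureActiveDirectory Operation=UserLoggedIn
| stats count by ApplicationId, UserId

Third-party apps such as ServiceNow or Salesforce would appear under their own application IDs rather than as separate vendor_product values. If you ingest Azure AD sign-in logs directly (e.g., sourcetype azure:aad:signin via the Microsoft Azure add-on), the appDisplayName field gives the human-readable app name instead.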
Hi,

When I add all the details required in the Splunk Add-on for Office 365 and click Add, I get the following error:

06-14-2022 20:01:41.224 +0500 ERROR ExecProcessor [432 ExecProcessor] - message from ""C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_group_metadata_manager.py"" urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

And:

2022-06-14 20:01:40,884 - pid:18576 tid:MainThread ERROR em_group_metadata_manager:94 - Failed to execute group metadata manager modular input -- Error: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 160, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\connection.py", line 84, in create_connection
    raise err
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\connection.py", line 74, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 677, in urlopen
    chunked=chunked,
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 381, in _make_request
    self._validate_conn(conn)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 978, in _validate_conn
    conn.connect()
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 309, in connect
    conn = self._new_conn()
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 172, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\adapters.py", line 449, in send
    timeout=timeout
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 727, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\retry.py", line 446, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_group_metadata_manager.py", line 89, in do_execute
    if not em_common.modular_input_should_run(session['authtoken'], logger=logger):
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\common_libs\logging_utils\instrument.py", line 69, in wrapper
    retval = f(decorated_self, *args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_common.py", line 342, in modular_input_should_run
    if not info.is_shc_member():
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\server_info.py", line 140, in is_shc_member
    server_info = self._server_info()
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\utils.py", line 159, in wrapper
    return func(*args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\server_info.py", line 62, in _server_info
    return self._rest_client.info
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\client.py", line 463, in info
    response = self.get("/services/server/info")
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 1241, in request
    response = self.handler(url, message, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\splunk_rest_client.py", line 145, in request
    verify=verify, proxies=proxies, cert=cert, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
solnlib.packages.requests.exceptions.ConnectionError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Hello everyone, I need your Splunk expertise. I have a lookup that is captured from a first query. Now I want my second query to search the values in the "domain" column; however, the domain column contains multiple values per cell, and when I query it, Splunk reads them as one value instead of searching per line. So instead of searching just:

1.fhgvfshdvcshdcsdfce6352dgcvgdcagnbdcjsagdvcwe.aski**bleep**a.com

and then

10.olskxqu287284y84fjwedwed2762391389hvhvivb87y38.aski**bleep**a.com

and then

11.qu28snmkjsamclk287284y84fjwedwed27623xcaolskx.aski**bleep**a.com

it instead searches for the single domain value:

"1.fhgvfshdvcshdcsdfce6352dgcvgdcagnbdcjsagdvcwe.aski**bleep**a.com
10.olskxqu287284y84fjwedwed2762391389hvhvivb87y38.aski**bleep**a.com
11.qu28snmkjsamclk287284y84fjwedwed27623xcaolskx.aski**bleep**a.com
12.njvh476xcaol4y84fjwedwed2764fncdjkasnmkjs.aski**bleep**a.com
13.caolskxqu2842fwefd9232476xcaolskscajcj47653.aski**bleep**a.com
14.jbdcwye6732hbsdjuhbjahsbayu723622gfwbfhsdbj.aski**bleep**a.com
15.2762391389hvhvivb87yqu28snmkjsamclk2.jwedwed2.aski**bleep**a.com
2.842fwefjwhbjhascajcjshbuwyrf6t376trf2gdvwqgdvqadqwscqw.gdyt326fgev.aski**bleep**a.com
3.842fwefjwhbjhascajcjsh76327dhqbd92324765364734snjvh348.qadqw.aski**bleep**a.com
4.ce6352ddcjsscajcj476536473bjhascajcjshbuwyrf6.aski**bleep**a.com
5.hgvdcywtewygcvhxcaolskxqu287284y84fncdjkasnmkjsamclk.aski**bleep**a.com
6.dcjsscajcj4vhxcaolskxqu28snmkjsamclk.aski**bleep**a.com
7.h76327dhqbd9232476xcaolskxqu2842fwefjwhbjhasc.aski**bleep**a.com
8.92324765364734snjvh476xcaolsjshdbc.lsk.aski**bleep**a.com
9.d9232476xcaolskscajcj476536473bjhaswyrf6.aski**bleep**a.com"
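A possible fix (a sketch; the lookup name captured_domains.csv and index=proxy are illustrative): split the multivalued cell into one value per result before using it as a subsearch, so each domain becomes its own search term:

index=proxy
    [| inputlookup captured_domains.csv
     | makemv tokenizer="(\S+)" domain
     | mvexpand domain
     | fields domain ]

makemv with a tokenizer breaks the cell on whitespace (including line breaks) into a multivalue field, and mvexpand turns each value into its own row. A more durable fix is to write the lookup with one domain per row in the first query (e.g., mvexpand before outputlookup), so every consumer sees separate values.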
Hello all,

We are using rsyslog to write logs to file on a Heavy Forwarder, but we found that it was escaping tabs as #011. We found a solution, which is to apply a SEDCMD to the file source as follows:

inputs.conf

[monitor:///opt/splunk-data/<datafile>]
sourcetype=<datasource>

props.conf

[source::///opt/splunk-data/<datafile>]
SEDCMD-fix_tab = s/#011/ /g

We applied the configuration and restarted the HF, and it worked for about 15 minutes, but then it suddenly stopped replacing this character with a tab. Why can this happen? Thank you!
Hi All,

We are trying to enable our Splunk to pass logs to a non-Splunk system. We found out that we can configure outputs.conf and add the below:

[tcpout]
[tcpout:fastlane]
server = <ip>:<port>
sendCookedData = false

My question is: on the receiving end, where will those logs/data be stored? Has anyone tried this, or does anyone have an idea about it? I'd appreciate your help on this. Thanks!
Hello, I have a prebuilt panel that looks like this:

<panel>
  <chart>
    <title>$titlePanel$</title>
    <search>
    ...

I'd like to call this prebuilt panel several times in my main dashboard, but I don't know how to set the token $titlePanel$ so that each panel gets a different title. I tried this, but it is not working:

<form>
  <label>My Label</label>
  <row>
    <set token="titlePanel">Title 1</set>
    <panel id="chartPanel1" ref="my_prebuilt_panel"></panel>
  </row>
  <row>
    <set token="titlePanel">Title 2</set>
    <panel id="chartPanel2" ref="my_prebuilt_panel"></panel>
  </row>
</form>

Is there a way to do this? Thanks
Hello, I am trying to access a DB that uses the latin1 character set through DB Connect. However, the text is not output normally. I tried the following JDBC URLs in the DB connection, but still no normal output:

jdbc:mysql://<ip>:<port>/<database>?characterEncoding=latin1
jdbc:mysql://<ip>:<port>/<database>?useUnicode=true&characterEncoding=latin1
jdbc:mysql://<ip>:<port>/<database>?useUnicode=true&characterEncoding=utf8
jdbc:mysql://<ip>:<port>/<database>?useUnicode=yes&characterEncoding=UTF-8

Could you help me?

MySQL version: 5.1
Hello everyone,

I have two events with different sourcetypes that have similar fields with similar, though not identical, values. I found a way to combine the fields by using coalesce; however, I would also like to combine the values in order to get a clear result. I am running this search:

index="main" category="Foo" OR sourcetype="foo"
| iplocation ip_address
| eval severity_level = coalesce(severity, foo_severity)
| geostats count by severity_level

and I am getting the following results:

longitude | latitude | HIGH | High | MEDIUM | Medium | LOW | Low
143.2104  | -33.494  | 39   | 4    | 40     | 30     | 15  | 5

And I want to get something like:

longitude | latitude | HIGH | MEDIUM | LOW
143.2104  | -33.494  | 43   | 70     | 20

Could you please give me a hint? Thank you very much in advance.
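Since the duplicate columns differ only by letter case, normalizing the merged field should collapse them. A sketch of the same search with upper() added:

index="main" category="Foo" OR sourcetype="foo"
| iplocation ip_address
| eval severity_level = upper(coalesce(severity, foo_severity))
| geostats count by severity_level

upper() maps High to HIGH, Medium to MEDIUM, and Low to LOW before geostats splits by the field, so each location gets a single column per severity (e.g., HIGH = 39 + 4 = 43).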
Hi All, I need your urgent help in fixing an issue in my PROD environment.

We have an application log which rotates twice daily: once in the afternoon and once around midnight. Logs start feeding into Splunk when the log rotates around the afternoon and stop feeding when the log rotates around midnight. If we make some minor change to the inputs, like adding an extra parameter to inputs.conf, it starts feeding again and then stops again in a few seconds.

This is my inputs.conf:

[monitor:///var/log/logpath/logpath/xx*.log]
sourcetype = abcd
disabled = false
index = xyz

This is my props.conf:

[abcd]
SHOULD_LINEMERGE=TRUE
BREAK_ONLY_BEFORE = \w\|\d+\|\d{2}:\d{2}:\d{2}\.\d{6}
MAX_TIMESTAMP_LOOKAHEAD = 15
NO_BINARY_CHECK = true
TIME_FORMAT = %H:%M:%S.%6N
TIME_PREFIX = \w.*\|\d*\|
category = Custom
disabled = false
pulldown_type = true
TRUNCATE=50000
MAX_EVENTS = 9999

Please let me know if any other information is required here. Any help will be highly appreciated. Thanks in advance, Prateek
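One common cause of this pattern (offered as a guess, not a confirmed diagnosis): if a rotated file begins with the same leading bytes as a file Splunk has already seen, the monitor input can treat it as already indexed, because the initial CRC matches. A sketch of inputs.conf adding crcSalt so each file path gets its own CRC:

[monitor:///var/log/logpath/logpath/xx*.log]
sourcetype = abcd
disabled = false
index = xyz
crcSalt = <SOURCE>

Note that crcSalt = <SOURCE> can cause duplicate indexing if rotation renames files, so it's worth checking how the rotation renames or truncates first; increasing initCrcLength is an alternative when files share long identical headers.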
I recently learned that it is best practice to use the Monitoring Console to manage our Splunk servers instead of installing Universal Forwarders on them. How then do we run a search across all of our Splunk servers' Event Logs to, for instance, see how long each one was up? I have the query, and I can run it against all of our other servers that do have the Universal Forwarder installed, and it works great; but when I query the wineventlog index, it finds none of our Splunk servers in it.
Hello team,

We are looking for an incident management solution and wish to try out Splunk On-Call, but we were not able to start a trial from your product page, as we are unable to submit the trial form and we get the following error.

Any ideas about what we might be doing wrong here?
So recently I went to troubleshoot some servers that were not showing up in our queries, and that's when I discovered that the ones that work, the ones that actually send their Event Log data to our Indexers, do not have an outputs.conf file in etc\system\local. How can that be?
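Worth checking: outputs.conf does not have to live in etc\system\local. Splunk merges configuration from every app directory, and deployment servers typically push forwarding settings in an app such as etc\apps\<some_output_app>\local (the app name here is just an illustration). You can see which file supplies the effective settings with btool, run on the forwarder itself:

splunk btool outputs list --debug

The --debug flag prefixes each setting with the path of the file it came from.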
Hi all,

I am working with logs in Splunk, and I need to capture the word before the date and time field and the word after it:

ERROR 2022-06-09 xyz-abc

So, using a regular expression, I wanted to extract the words "ERROR" and "xyz-abc", but this phrase is not necessarily at the start of the log; it can be anywhere in the log, like:

log1: ERROR 2022-06-09 xay-abc  connecting to network.
log2: java.net.spring ERROR 2022-06-09 connecting to network.

So please help me with a solution so that I can extract the field which contains the error and the other field which contains abc-xyz. Thanks in advance
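A possible starting point (a sketch; it assumes the word before the date is always ERROR and the wanted trailing token is two hyphen-joined words like xay-abc):

| rex "(?<log_level>ERROR)\s+\d{4}-\d{2}-\d{2}\s+(?<error_code>\w+-\w+)?"

For log1 this yields log_level=ERROR and error_code=xay-abc; for log2, which has no hyphenated token after the date, error_code stays empty while log_level is still extracted. If the word before the date can be something other than ERROR, replace (?<log_level>ERROR) with (?<log_level>\S+) so any token anchored by the following date is captured.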
I set up a Controller audit report on both of our SaaS Controllers and received the reports over email. There's nothing in the email or the report which indicates which report is for which of the two controllers. As each email is also sent from AppDynamics Reports <noreply@appdynamics.com>, we have no way of telling the reports apart.
So I want to know how long our Splunk servers have been up. I have the query, and it works great on hundreds of other servers, but not on our two dozen Splunk servers (Cluster Master, Deployment Servers, Indexers, Search Heads, etc.). I think it is because we do not have the Universal Forwarder installed on them. So can we install it on the Splunk servers, or am I dense and missing something, and we can just use some Splunk Enterprise component to send Event Log data to our Indexers?
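Full Splunk Enterprise instances can forward their own data without a Universal Forwarder (via outputs.conf on the instance itself), but for plain uptime you may not need Windows Event Logs at all: every Splunk server already writes startup messages to the _internal index. A sketch (it assumes the "Splunkd starting" message text in splunkd.log, which is worth verifying on your version):

index=_internal sourcetype=splunkd "Splunkd starting"
| stats latest(_time) as last_start by host
| eval uptime_days=round((now()-last_start)/86400, 1)
| eval last_start=strftime(last_start, "%Y-%m-%d %H:%M:%S")

This reports uptime of the splunkd process rather than the machine, so a reboot without a Splunk restart would not appear.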
I need to extract a specific part of the data from the file path:

C:/Users/USSACDev/AppData/Local/Temp/WindowsAETemp/35018_2225424_1655272292585
C:/Users/USSACDev/AppData/Local/Temp/WindowsAETemp/35018_2225421_1655272247058

The last segment (e.g., 35018_2225424_1655272292585) should be extracted. I already extracted the full path with a rex command:

| rex "\"aeci\".*\"temp_path\":\s+\"(?<activity_id>[^\"]+)"

With the above command I got the file path; from that, I need the specified final part. Can anyone help with this? Please provide a single rex command that includes the regex above, i.e. it should still start from "aeci".
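A possible single command (a sketch; it keeps the activity_id field name from the original and assumes the last path segment is the wanted part):

| rex "\"aeci\".*\"temp_path\":\s+\"[^\"]*\/(?<activity_id>[^\/\"]+)\""

The greedy [^\"]* consumes everything up to the last / inside the quoted path, so activity_id captures only the final segment, e.g. 35018_2225424_1655272292585.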