All Topics



Hello All, I have checked the URLs in User Experience (Pages & AJAX Requests). There are a lot of URLs that have no requests (0 requests), and we delete them manually, so I thought of an enhancement: automating the deletion of URLs that have 0 requests. 1) Why do we have URLs with 0 requests? 2) Can we automate the delete activity? If yes, what improvement does automating this step bring to the tool? 3) What are the consequences of this step? Thanks in advance, Omneya
Hi, I'm trying to generate a report with the following information:
- Total bandwidth for each user
- List of the top 3 URLs by bandwidth usage for each user
- Bandwidth for each URL
Thank you!
Looking to brush off the cobwebs of my Splunk use, I wanted to find a simple query of server activity/traffic for a server on our domain. If anyone has a basic query they use on a regular basis to see traffic on their servers, I'd appreciate it if you could share it. Once I get the basic syntax, I can take it from there.
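If it helps as a starting point, a minimal sketch of such a query might look like the one below; the index name, host value, and time range are placeholders to replace with your own:

```
index=main host="myserver" earliest=-24h
| timechart span=1h count by sourcetype
```

This charts event volume per hour for one host, split by sourcetype, which is often enough to see which sources are active on a server.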
Has anyone created a data visualization add-on or app for stock analysis? I have searched Splunkbase extensively. I want to display open/high/low/close data for stock tracking using a candlestick view, but I can't really find an existing visualization that is able to display stock data that way. Any thoughts or suggestions? Thank you.
I was going through the tutorial to build "your first app" on the Splunk developer site here, and I could not get the API call to create an index. I am running on a Windows 10 development box (trial license), Splunk Enterprise version 8.2.6, build a6fe1ee8894b. The command below fails and I am not sure why. I can use one of the other two options (CLI or Web UI) to create the index, but I wanted to know why the REST API option failed.

C:\apps\splunk\bin>curl -k -u "user":"password" https://localhost:8089/servicesNS/admin/search/data/indexes -d name="devtutorial"

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Action forbidden.</msg>
  </messages>
</response>

Thank you.
Hello Splunkers, After my own unsuccessful research, I thought you might have the answer. I'm wondering if there is a way to make the thruput limit variable. My search peer may have too large an amount of data to index at a time due to a network issue, and I would like to spread out the indexing, during the night for example. So is there a way to set a throughput ([thruput]) limit when my server is busiest, and unset this limit when it is less used? Thanks in advance for your time and your answer! Regards, Antoine
Hi everyone, I want to combine the two eval commands below into a single one. I have tried a comma but it's not working. How do I do it?

|eval comments= if(Action="create","something has been created",'comments')
|eval comments= if(Action="delete","something has been deleted",'comments')

Thanks.
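For what it's worth, one common way to collapse the two evals above into a single command is a single case() expression; this is a sketch based on the fields in your query, not a tested answer:

```
| eval comments=case(Action="create", "something has been created",
                     Action="delete", "something has been deleted",
                     1=1, 'comments')
```

The final 1=1 branch keeps the existing value of comments for any other Action, matching the fall-through behavior of the original pair of if() calls.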
Hey everyone, I hope you're having a great day! I have configured a custom field extraction in the Splunk Search app for my sourcetype, but I don't have the option to share it with other users, as I do on another Splunk instance where I have the Power role (with the Power role, I can share it no problem). I don't want to assign myself the Power role, since it's broad and wouldn't follow the principle of least privilege. So which permission would I need to assign myself in order to be able to share my field extraction with other users?
Hi, I'm wondering if there isn't an issue with the correlation search that comes with Splunk ES, "Threat activity detected". My problem comes from the fact that when it is triggered, I then get at least 2 other alerts concerning the "24h threshold risk score" (RBA). I have taken the original correlation search (at least I think it is):

| from datamodel:"Threat_Intelligence"."Threat_Activity"
| dedup threat_match_field,threat_match_value
| `get_event_id`
| table _raw,event_id,source,src,dest,src_user,user,threat*,weight
| rename weight as record_weight
| `per_panel_filter("ppf_threat_activity","threat_match_field,threat_match_value")`
| `get_threat_attribution(threat_key)`
| rename source_* as threat_source_*,description as threat_description
| fields - *time
| eval risk_score=case(isnum(record_weight), record_weight, isnum(weight) AND weight=1, 60, isnum(weight), weight, 1=1, null()),
  risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),
  risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),
  risk_network=if(threat_match_field IN("http_user_agent", "url") OR threat_match_field LIKE "certificate_%",null(),threat_match_value),
  risk_host=if(threat_match_field IN("file_name", "process", "service") OR threat_match_field LIKE "registry_%",null(),threat_match_value),
  risk_other=if(threat_match_field IN("query", "answer", "src", "dest", "src_user", "user", "file_hash", "http_user_agent", "url", "file_name", "process", "service") OR threat_match_field LIKE "certificate_%" OR threat_match_field LIKE "registry_%",null(),threat_match_value)

and I noticed that the mechanism that selects which risk category applies flips after the first line.

1. risk_system

risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),

If I translate: if threat_match_field is "query" or "answer", then the risk category is system and risk_system = the IOC that matched. In this case that is a domain or URL (because it's a DNS query or answer). THIS LINE IS GOOD.

2. risk_hash

risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

But in the case of a hash, if I translate: if threat_match_field is "file_hash", then the risk category is NOT hash and risk_hash = null. THIS LINE IS WRONG.

The same goes for all the other categories: network, host, other. So in my opinion the values in the if() statements were reversed:

risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

should be

risk_hash=if(threat_match_field IN("file_hash"),threat_match_value, null()),

Is it me? My instance? Or what? Thanks in advance, Xavier
Hi community, I have 2 different lists with fields as follows:
list A - ip_address, source, account_id
list B - ip_address, source, account_id, field4, field5
I want to compare both lists to accomplish list(B) - list(A), i.e. keep only the list(B) entries whose ip_address value does not appear in list(A), while also returning the values of field4 and field5. Example:

list A
ip_address    source  account_id
10.0.0.1      A       1000
192.168.0.1   A       1001

list B
ip_address    source  account_id  field4  field5
10.0.0.2      B       999         xxx     yyyy
192.168.0.1   B       1001        xxy     yyyx

Result
ip_address    source  account_id  field4  field5
10.0.0.2      B       999         xxx     yyyy

I have tried the following:

index=seceng source="listB"
| eval source="B"
| fields ip_address source account_id field4 field5
| append
    [| inputlookup listA
     | eval source="A"
     | fields ip_address source account_id]
| stats values(source) as source, count by ip_address account_id field4 field5
| where count == 1 AND source == "B"

The issue with this query is that, since field4 and field5 are attributes unique to list(B), the stats command only groups the list(B) entries on their own. It works when field4 and field5 are removed from the stats, but they are the attributes that I want to include in the result. Can anyone suggest how the expected result can be accomplished? Really appreciate it, and thanks in advance!
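One possible adjustment, offered as an untested sketch of the query above: keep the append/stats approach, but move field4 and field5 out of the by clause and into values() so that rows from list A and list B can still group on the shared ip_address:

```
index=seceng source="listB"
| eval source="B"
| fields ip_address source account_id field4 field5
| append
    [| inputlookup listA
     | eval source="A"
     | fields ip_address source account_id]
| stats values(source) as source, values(account_id) as account_id,
        values(field4) as field4, values(field5) as field5, count
        by ip_address
| where count == 1 AND source == "B"
```

Since field4 and field5 no longer participate in the grouping, a list B row and a list A row with the same ip_address collapse into one group (count=2) and get filtered out, while ip_addresses unique to list B survive with their field4/field5 values. This assumes each list has at most one row per ip_address, as in your example.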
Hello Splunkers, I need your help finding a solution to the following issue. I have a log file as a source that I'm indexing as metrics. Sample event:

2022/06/15 10:15:22 Total: 1G Used: 65332K Free: 960.2M

I'm able to index the values in a metric index, but I would like to convert everything to the same unit before doing so. I tried with eval but it doesn't work.
props.conf

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TRANSFORMS-extract_test = fields_extract_test
EVAL-Total = Total*100
METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_metrics_test

transforms.conf

[fields_extract_test]
REGEX = .*Total: (.*?)([A-Z]) Used: (.*?)([A-Z]) Free: (.*?)([A-Z])
FORMAT = Total::$1 Total_Unit::$2 Used::$3 Used_Unit::$4 Free::$5 Free_Unit::$6
WRITE_META = true

[metric-schema:extract_metrics_test]
METRIC-SCHEMA-MEASURES = _ALLNUMS_
METRIC-SCHEMA-WHITELIST-DIMS = Total,Total_Unit,Used,Used_Unit,Free,Free_Unit

How can I do this? Thanks in advance
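One thing worth checking, offered as an assumption to verify: EVAL- statements in props.conf run at search time, so they cannot rewrite values before they land in the metric index. For index-time conversion, transforms.conf supports INGEST_EVAL. The sketch below is my own and untested (the stanza name and the normalize-to-megabytes logic are assumptions based on the fields your fields_extract_test transform extracts):

```
[normalize_units_test]
INGEST_EVAL = Total=case(Total_Unit=="K", Total/1024, Total_Unit=="M", Total, Total_Unit=="G", Total*1024), Used=case(Used_Unit=="K", Used/1024, Used_Unit=="M", Used, Used_Unit=="G", Used*1024), Free=case(Free_Unit=="K", Free/1024, Free_Unit=="M", Free, Free_Unit=="G", Free*1024)
```

You would then chain it after the extraction in props.conf, e.g. TRANSFORMS-extract_test = fields_extract_test, normalize_units_test, and drop the EVAL-Total line.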
I have a panel which shows the usage of a dashboard in the GMT timezone. Is it possible to show the same data in different timezones (PST, EST, IST, etc.) as different lines in the same chart? Below is the query, which shows the count in GMT:

index="_internal" user!="-" sourcetype=splunkd_ui_access "GET" "sample"
| rex field=uri "\/app\/(?<App_Value>\w+)\/(?<dashboard>[^?\/]+)"
| search App_Value="sample" dashboard="daily_health"
| timechart count

How can we modify this query to show different timezones in a single chart?
We have platform events in Salesforce that get published, so from Splunk we need to subscribe to those events. How can we do this in Splunk? Please advise.
If a cloud application like ServiceNow or Salesforce is integrated with central authentication like Azure AD for authenticating users, how can I identify the user authentication logs for these specific apps in the Azure AD logs? I am looking at logs using this query:

index=o365 sourcetype=o365:management:activity | stats count by vendor_product

but most of these vendor products are Microsoft-based. I don't see any other cloud apps here. Would somebody be able to help me with this, please?
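As a hedged suggestion (the index, sourcetype, and field names below are assumptions that vary by add-on): the per-application detail is usually found in the Azure AD sign-in logs rather than in the O365 management activity feed. If you ingest those, something along these lines should break authentications down by application:

```
index=azuread sourcetype="azure:aad:signin"
| stats count by appDisplayName, userPrincipalName
```

If you only have the o365:management:activity data, check whether its events carry an application identifier field you can group by instead of vendor_product.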
Hi, When I add all the details required on the Splunk Add-on for Office 365 and click Add, I get the following error:

06-14-2022 20:01:41.224 +0500 ERROR ExecProcessor [432 ExecProcessor] - message from ""C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_group_metadata_manager.py"" urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

and:

2022-06-14 20:01:40,884 - pid:18576 tid:MainThread ERROR em_group_metadata_manager:94 - Failed to execute group metadata manager modular input -- Error: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Traceback (most recent call last):
File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw
File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\connection.py", line 84, in create_connection raise err
File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\connection.py", line 74, in create_connection sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program 
Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 677, in urlopen chunked=chunked, File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 978, in _validate_conn conn.connect() File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 309, in connect conn = self._new_conn() File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\adapters.py", line 449, in send timeout=timeout File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\urllib3\util\retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target 
machine actively refused it')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_group_metadata_manager.py", line 89, in do_execute if not em_common.modular_input_should_run(session['authtoken'], logger=logger): File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\common_libs\logging_utils\instrument.py", line 69, in wrapper retval = f(decorated_self, *args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\em_common.py", line 342, in modular_input_should_run if not info.is_shc_member(): File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\server_info.py", line 140, in is_shc_member server_info = self._server_info() File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\utils.py", line 159, in wrapper return func(*args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\server_info.py", line 62, in _server_info return self._rest_client.info File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\client.py", line 463, in info response = self.get("/services/server/info") File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 289, in wrapper return request_fun(self, *args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 71, in new_f val = f(*args, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 679, in get response = self.http.get(path, all_headers, **query) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 
1183, in get return self.request(url, { 'method': "GET", 'headers': headers }) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\splunklib\binding.py", line 1241, in request response = self.handler(url, message, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\splunk_rest_client.py", line 145, in request verify=verify, proxies=proxies, cert=cert, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\api.py", line 60, in request return session.request(method=method, url=url, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\sessions.py", line 533, in request resp = self.send(prep, **send_kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\sessions.py", line 646, in send r = adapter.send(request, **kwargs) File "C:\Program Files\Splunk\etc\apps\splunk_app_infrastructure\bin\external_lib\solnlib\packages\requests\adapters.py", line 516, in send raise ConnectionError(e, request=request) solnlib.packages.requests.exceptions.ConnectionError: HTTPSConnectionPool(host='127.0.0.1', port=8089): Max retries exceeded with url: /services/server/info (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001A64EF67848>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))  
Hello everyone, I need your Splunk expertise. I have a lookup that is captured from a first query. Now I want my second query to search the values in the "domain" column; however, the domain column contains multiple values, and somehow when I query it, Splunk reads them as one value instead of searching per line. So instead of searching just:

1.fhgvfshdvcshdcsdfce6352dgcvgdcagnbdcjsagdvcwe.aski**bleep**a.com

and then

10.olskxqu287284y84fjwedwed2762391389hvhvivb87y38.aski**bleep**a.com

and then

11.qu28snmkjsamclk287284y84fjwedwed27623xcaolskx.aski**bleep**a.com

it instead searches for the single domain value:

"1.fhgvfshdvcshdcsdfce6352dgcvgdcagnbdcjsagdvcwe.aski**bleep**a.com
10.olskxqu287284y84fjwedwed2762391389hvhvivb87y38.aski**bleep**a.com
11.qu28snmkjsamclk287284y84fjwedwed27623xcaolskx.aski**bleep**a.com
12.njvh476xcaol4y84fjwedwed2764fncdjkasnmkjs.aski**bleep**a.com
13.caolskxqu2842fwefd9232476xcaolskscajcj47653.aski**bleep**a.com
14.jbdcwye6732hbsdjuhbjahsbayu723622gfwbfhsdbj.aski**bleep**a.com
15.2762391389hvhvivb87yqu28snmkjsamclk2.jwedwed2.aski**bleep**a.com
2.842fwefjwhbjhascajcjshbuwyrf6t376trf2gdvwqgdvqadqwscqw.gdyt326fgev.aski**bleep**a.com
3.842fwefjwhbjhascajcjsh76327dhqbd92324765364734snjvh348.qadqw.aski**bleep**a.com
4.ce6352ddcjsscajcj476536473bjhascajcjshbuwyrf6.aski**bleep**a.com
5.hgvdcywtewygcvhxcaolskxqu287284y84fncdjkasnmkjsamclk.aski**bleep**a.com
6.dcjsscajcj4vhxcaolskxqu28snmkjsamclk.aski**bleep**a.com
7.h76327dhqbd9232476xcaolskxqu2842fwefjwhbjhasc.aski**bleep**a.com
8.92324765364734snjvh476xcaolsjshdbc.lsk.aski**bleep**a.com
9.d9232476xcaolskscajcj476536473bjhaswyrf6.aski**bleep**a.com"
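In case it is useful, one sketch of a fix (the lookup and field names here are placeholders): split the single multivalue cell into separate values and expand them into one row per domain before using them in the search, e.g. with makemv and mvexpand:

```
| inputlookup my_domains.csv
| makemv tokenizer="([^\s]+)" domain
| mvexpand domain
| table domain
```

The tokenizer regex assumes the domains are separated by whitespace or newlines; adjust it to whatever delimiter your lookup actually contains. After mvexpand, each domain is its own row, so a subsearch over this lookup would match per line instead of treating the whole blob as one value.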
Hello all, We are using rsyslog to write logs to a file on a heavy forwarder, but we found that it was escaping tabs as #011. The solution we found is to apply a SEDCMD to the file source as follows:
inputs.conf

[monitor:///opt/splunk-data/<datafile>]
sourcetype=<datasource>

props.conf

[source::///opt/splunk-data/<datafile>]
SEDCMD-fix_tab = s/#011/ /g

We applied the configuration and restarted the HF, and it worked for about 15 minutes, but then it suddenly stopped replacing this character. Why could this happen? Thank you!
Hi All, We are trying to enable our Splunk to pass logs to a non-Splunk system. We found out that we can configure outputs.conf and add the below:

[tcpout]
[tcpout:fastlane]
server = <ip>:<port>
sendCookedData = false

My question is: on the receiving end, where will those logs/data be stored? Has anyone tried this, or does anyone have an idea about it? Appreciate your help on this. Thanks!
Hello, I have a prebuilt panel that looks like this:

<panel>
  <chart>
    <title>$titlePanel$</title>
    <search>
...

I'd like to reference this prebuilt panel several times in my main dashboard, but I don't know how to set the token $titlePanel$ so that each panel has a different title. I tried this, but it is not working:

<form>
  <label>My Label</label>
  <row>
    <set token="titlePanel">Title 1</set>
    <panel id="chartPanel1" ref="my_prebuilt_panel"></panel>
  </row>
  <row>
    <set token="titlePanel">Title 2</set>
    <panel id="chartPanel2" ref="my_prebuilt_panel"></panel>
  </row>
</form>

Is there a way to do this? Thanks
Hello, I am trying to access a DB that uses the latin1 character set via DB Connect. However, the text is not output correctly. I tried the following JDBC URLs in the DB connection, but still got no correct output:

jdbc:mysql://<ip>:<port>/<database>?characterEncoding=latin1
jdbc:mysql://<ip>:<port>/<database>?useUnicode=true&characterEncoding=latin1
jdbc:mysql://<ip>:<port>/<database>?useUnicode=true&characterEncoding=utf8
jdbc:mysql://<ip>:<port>/<database>?useUnicode=yes&characterEncoding=UTF-8

Could you help me?

MySQL version: 5.1