Hello, I have a regex question. I have a field called "Container"; below are examples of its values. I would like to extract a certain part of the value, but unfortunately there's no unique marker to tell where it starts and stops. However, I noticed that there are always 3 underscores before the specific part I need to extract, so that could probably help with the regex. Can you help me with the regex expression (starts after the 3rd underscore and ends before the next underscore)?

1) k8s_jenkins_jenkins-16-mrlz4_tau-ops_eb099c1d-6d70-11ea-8ba8-001a4a160104_0
2) k8s_datadog-agent_datadog-agent-t4dlc_clusteradmin_dd5f238b-6a16-11ea-8ef9-566f4e1c0167_351
3) k8s_core-order-service_core-order-service-deployment-1-t9b29_fltc-ods-uit_b10cf94d-64b1-11ea-8ef9-566f4e1c0167_3513

Desired regex result for the Container field:

1) tau-ops
2) clusteradmin
3) fltc-ods-uit

Thank you in advance.
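One possible approach (just a sketch; the capture-group name `namespace` is my own choice, not from the original data) is to skip the first three underscore-delimited segments and capture everything up to the next underscore:

```
... | rex field=Container "^(?:[^_]+_){3}(?<namespace>[^_]+)"
```

Against the three sample values above, this would capture tau-ops, clusteradmin, and fltc-ods-uit respectively.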
Hi, I have installed a new Linux forwarder but I am not able to see logs from it. I have set deploymentclient.conf and restarted the forwarder a couple of times, but I don't see any logs from this forwarder in the _internal index. I am getting the error below:

ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf.

My outputs.conf:

[tcpout]
defaultGroup = HF_group

[tcpout:HF_group]
autoLBFrequency = 60
server = myserver:9997

I have seen a couple of links about enabling the listener port but could not fix my issue from that. P.S. My forwarder sends its logs to a heavy forwarder. Any solution to fix this issue? Thanks in advance :)
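That particular error usually means the forwarder has no effective outputs.conf at all. One thing worth checking (a hedged suggestion, not a guaranteed fix): whether the outputs.conf shown above actually exists on the forwarder itself (e.g. in $SPLUNK_HOME/etc/system/local/ or an app's local/ directory), rather than only in the deployment app on the deployment server. It can also be created on the forwarder via the CLI:

```
$SPLUNK_HOME/bin/splunk add forward-server myserver:9997
$SPLUNK_HOME/bin/splunk restart
```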
I have an existing lookup CSV and I want to update a row with a new value.

ID   Name    Location
549  Test_1  Bangalore
549  Test_2  Delhi
729  Test_3  Mumbai
549  Test_4  Bangalore
729  Test_5  Bangalore

Test_4 should be replaced with Test_8, so my lookup table will look like the below:

ID   Name    Location
549  Test_1  Bangalore
549  Test_2  Delhi
729  Test_3  Mumbai
549  Test_8  Bangalore
729  Test_5  Bangalore

How can I achieve this through a search query?

Regards,
Raja
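One way to do this (a sketch; mylookup.csv stands in for the actual lookup name) is to read the lookup back in, rewrite the matching row with eval, and write the result back out:

```
| inputlookup mylookup.csv
| eval Name=if(ID=549 AND Name="Test_4", "Test_8", Name)
| outputlookup mylookup.csv
```

Note that outputlookup replaces the whole file, so it is worth testing the search without the final outputlookup first.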
Hi, I have installed a new Linux forwarder, but I am not able to see the logs from it. I have configured deploymentclient.conf and restarted the forwarder a couple of times, but I am still getting the error below, and I don't see any logs in the _internal index from this forwarder. P.S. The forwarder I installed sends its logs to a heavy forwarder.

03-27-2020 07:59:16.055 -0700 ERROR TcpOutputProc - LightWeightForwarder/UniversalForwarder not configured. Please configure outputs.conf

My outputs.conf:

[tcpout]
defaultGroup = HF_group_1

[tcpout:HF_group_1]
autoLBFrequency = 20
server = myserver:9997

I have seen a couple of links about enabling the listener port, but could not fix my issue. Thanks in advance :)
Hi there, does anybody know if and when there will be an updated version with Splunk 8 / Python 3 support? The current version explicitly states Python 2.7 as a requirement in the documentation. Thanks a lot!
We have a Splunk dashboard with traffic lights implemented as described in this article: https://answers.splunk.com/answers/309024/is-it-possible-to-show-the-traffic-light-visualiza.html Though the lights appear as expected, nothing happens on click. Ideally, as with a chart, clicking would open a search window with the corresponding events. Is there some way I can achieve the same?
Hi, pardon if my question is too obvious; I am a Splunk noob. My requirement is: I have a search string, for example "Error occurred in getting ID....". The result log of this search includes a unique ID (there are multiple logs each time that search string is matched, with a different unique ID in each such log). I want to further search for one specific log that has the same unique ID plus some other text.

E.g., this is the search string query:

index=index "Error finding xID for ID:" | dedup uniqueId

This is one block of the result:

{"level":"ERROR","uniqueId":"48b3825e993981df25d13670 1", "message":"Error finding xID for ID:1234"}

What I require is to be able to further search based on uniqueId dynamically, e.g.:

| <uniqueId> "some other search string"
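One common pattern for this (a sketch; it assumes uniqueId is already extracted as a field from the JSON) is to feed the IDs from the first search into a second search as a subsearch:

```
index=index "some other search string"
    [ search index=index "Error finding xID for ID:" | dedup uniqueId | fields uniqueId ]
```

The subsearch returns uniqueId=<value> pairs that are automatically combined into the outer search, so the outer search only matches events carrying one of those IDs.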
I have tried to configure the AAD Users input; however, whenever it runs it fails with the following error:

2020-03-30 23:30:53,493 ERROR pid=10875 tid=MainThread file=base_modinput.py:log_error:307 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_ms_aad/modinput_wrapper/base_modinput.py", line 127, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/MS_AAD_user.py", line 72, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/input_module_MS_AAD_user.py", line 29, in collect_events
    users = azutils.get_items(helper, access_token, url)
  File "/opt/splunk/etc/apps/TA-MS-AAD/bin/ta_azure_utils/utils.py", line 33, in get_items
    raise e
RuntimeError: maximum recursion depth exceeded while calling a Python object

We have over 100k users in AAD, so I'm wondering if this is more than the add-on can handle?
Hello everyone, I want to create a dashboard drilldown based on field values. Example: I have a dashboard with a field named "Product" which contains a list of products such as Product A, Product B, etc. Now I want to create a drilldown based on field values: if I click on Product A it should take me to the dashboard for Product A, and similarly for Product B and its respective dashboard. Thanks
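One way to sketch this in Simple XML (the target dashboard name product_details, the app name search, and the token name form.product are all assumptions here) is a drilldown link on the panel that passes the clicked value to the target dashboard:

```
<drilldown>
  <link target="_blank">/app/search/product_details?form.product=$click.value$</link>
</drilldown>
```

If each product needs an entirely different dashboard, <condition> elements matching specific clicked values can be used inside <drilldown> instead of a single <link>.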
I recently noticed that the UI for lookup definitions now has an Advanced checkbox. If I select it, I get the option to set match_type, which is described as:

Match type: Optionally set up non-exact matching of a comma-and-space-delimited field list. Format is <match_type>(<field>). Available values for match_type are WILDCARD and CIDR.

So I added a wildcard match for my lookup field IP to my lookup definition for tools:

match_type=WILDCARD (IP)

(Note: I tried CIDR too, with similar results.) In the lookup file tools.csv, I had an entry with a * in the IP:

IP: 10.10.35.*
Tool: Splunk

But when I try to use it, I do not get a match:

| makeresults | eval IP="10.10.35.9" | lookup tools IP

This did not return the Tool field, although if I pass it a matching string it does:

| makeresults | eval IP="10.10.35.*" | lookup tools IP

gets me back Tool=Splunk. Is there something I am misunderstanding about the UI-based lookup wildcard? Something else I should be doing?
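One thing that may be worth checking (a guess, not a confirmed cause): the documented transforms.conf format puts no space between the match type and the field list, i.e. WILDCARD(IP) rather than WILDCARD (IP). The equivalent hand-written definition (the stanza name tools is assumed to match the lookup definition) would be:

```
[tools]
filename = tools.csv
match_type = WILDCARD(IP)
```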
Hi all, I've been working on getting the number of active VPN users from our ASA logs with a simple query that gets the latest event for each user+IP and checks whether it is a vpn_start event, then counts the total. The premise is that if a start event was logged last, the session is still active:

index=ciscoasa (eventtype="cisco_vpn_start" OR eventtype="cisco_vpn_end")
| dedup user, src_ip sortby -_time
| eval Active=if(eventtype="cisco_vpn_start",1,null)
| stats count(Active)

This works fine at a point in time when I run the search or refresh the dashboard. However, I want to be able to timechart this over a day/week to show how many active connections I have at different intervals of the day, i.e.:

7am - 10 active sessions
7:30am - 15 active sessions
8am - 20 active sessions
8:30am - 30 active sessions

I only want it to show the active sessions at that particular point in time, not how many sessions were started/stopped in the prior interval, so I need some way of "executing" the search at different times and mapping the results. Or perhaps there's a better way. Any thoughts or suggestions?
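One possible approach (a sketch, reusing the eventtypes from the query above; the 30-minute span is arbitrary) is to turn starts and ends into +1/-1 events, keep a running total with streamstats, and timechart the latest running total per interval:

```
index=ciscoasa (eventtype="cisco_vpn_start" OR eventtype="cisco_vpn_end")
| eval delta=if(eventtype="cisco_vpn_start", 1, -1)
| sort 0 _time
| streamstats sum(delta) as active_sessions
| timechart span=30m latest(active_sessions) as active_sessions
```

This approximates the number of concurrently open sessions at each interval boundary; sessions opened before the search window starts will be undercounted, so searching a little earlier than the window of interest helps.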
I am trying to create a clear button for my dashboard, but none of the XML I have written has worked. I have one date and time dropdown followed by two more dropdowns. After those I have six text input fields, then a submit button. I am trying to add a clear or reset button after that to remove the text from the inputs. Does anyone have any thoughts on how this should be done? Here is what I have tried:

<done>
  <unset token="update_user_tok"></unset>
  <unset token="update_index_tok"></unset>
  <unset token="update_date_tok"></unset>
  <unset token="update_exptime_tok"></unset>
</done>

<fieldset resetButton="true" autoRun="false">
Hi, we need to use SSL when building the connection to MS SQL Server. Please tell me how to configure SSL in the DB Connect app, and how to import the certificates.
When I execute the job inspector on an indexer and on a search head in an indexer cluster environment, are the results the same? Do they have the same SID for one search?
Hello all, I am trying to run a search query via the API and getting errors. I am trying to use the AND/OR and NOT IN operators. The query gets results from the Splunk UI, but it is not working via API calls.

Query:

index=node message=abc appId="xyz" items.x_id != "" OR items.data.ed_id!=“”

API call:

curl -k -u username:password https://host:8089/servicesNS/admin/search/search/jobs/export --data-urlencode search="search index=node message=abc appId="xyz" items.x_id != "" OR items.data.ed_id!=“” earliest=01/27/2020:0:0:0 latest=01/28/2020:0:0:0" -d output_mode=xml -o test1.xml

I have tried multiple combinations with quotes and no quotes but was not able to figure it out. Your help and guidance would be greatly appreciated.
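Two things stand out in the command as pasted (hedged guesses, not confirmed causes): the unescaped inner double quotes around xyz and "" terminate the shell's double-quoted search argument early, and the last comparison uses curly quotes (“”), which Splunk does not treat as string delimiters. One way to sidestep both is to single-quote the whole search for the shell and use only straight quotes inside it:

```
curl -k -u username:password https://host:8089/servicesNS/admin/search/search/jobs/export \
  --data-urlencode search='search index=node message=abc appId="xyz" items.x_id != "" OR items.data.ed_id != "" earliest=01/27/2020:0:0:0 latest=01/28/2020:0:0:0' \
  -d output_mode=xml -o test1.xml
```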
I have an event as below:

Mar 30 16:59:08 vg1 : %ASA-4-113019: Group = EMPLOYEE, Username = roys86, IP = ...**, Session disconnected. Session Type: SSL, Duration: 7h:18m:21s, Bytes xmt: 408659006, Bytes rcv: 162000348, Reason: User Requested

Now I would like to fetch the values of the fields Session Type, Duration, Bytes xmt, Bytes rcv, and Reason. I would also like to rename some of the fields after fetching the data. Thanks in advance!!
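A possible extraction (just a sketch; the capture-group names and the renamed field names are my own choices) using rex, based on the comma-separated layout of the sample event:

```
... | rex "Session Type: (?<session_type>[^,]+), Duration: (?<duration>[^,]+), Bytes xmt: (?<bytes_xmt>\d+), Bytes rcv: (?<bytes_rcv>\d+), Reason: (?<reason>.+)$"
| rename bytes_xmt AS bytes_sent, bytes_rcv AS bytes_received
```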
Hi, I am very new to Splunk and don't even know what to search for. As you can see below, every customer that is processed successfully writes 2 events; if not, only 1 event (Start). How do I find the customers which have only a start event and no end event? My log is written like below:

TIMESTAMP Customer1 Start
TIMESTAMP Customer1 End
TIMESTAMP Customer2 Start
TIMESTAMP Customer2 End
TIMESTAMP Customer3 Start
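One way (a sketch; the index name, and the rex that pulls out customer and status, are assumptions — the rex treats the timestamp as a single token, so it will need adjusting for a real timestamp format) is to count start and end events per customer and keep the ones with no end:

```
index=mylogs ("Start" OR "End")
| rex "^\S+\s+(?<customer>\S+)\s+(?<status>Start|End)"
| stats count(eval(status="Start")) as starts, count(eval(status="End")) as ends by customer
| where ends=0
```

In the sample data above, this would return only Customer3.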
Sorry if this is a repetitive question (I didn't see anyone having this issue on the question board). I went through the instructions and did a test of eventtype=pan, but it does not return any data. However, when doing eventtype=*, I see the logs and the different sourcetypes (pan:traffic, pan:userid, etc.), and the time is correct compared to the firewall and the Splunk server. Attempting to filter explicitly on one of those sourcetypes returns no data. In the Palo Alto Networks app, I do see some data, like SaaS Applications. When going into File Activity, I see top apps and bytes transferred over time, but everything else states "no results found", and the same goes for all fields in Web Activity (even searching with "All time"). The firewall configuration comes up as well. This is a newly installed Splunk server (run by an equally new Splunk noob) that I'm using at home for learning, so any assistance would be greatly appreciated! Thank you in advance for your time.
Hope everyone is keeping safe. I'm following this document: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad (Discard specific events and keep the rest). The first app is working as expected; however, with the second app I've created, the filtering is not working. Both apps send data to the same index, but the apps are on different servers and handle different logs. We are using universal forwarders.

App1:

[ ~/etc/deployment-apps/app1/local] $ cat props.conf
[uLinga]
TRANSFORMS-set = setnull,setparsing

[ ~/etc/deployment-apps/app1/local] $ cat transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = INFRASFT
DEST_KEY = queue
FORMAT = indexQueue

App2:

[ ~/etc/deployment-apps/app2/local] $ cat props.conf
[Aux]
TRANSFORMS-set = setnull,setparsing

[ ~/etc/deployment-apps/app2/local] $ cat transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = INFO|ERROR|WARN
DEST_KEY = queue
FORMAT = indexQueue

Thank you
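One candidate explanation (hedged, based only on the configs shown above): transforms.conf stanza names are global across apps, so when both apps define [setnull] and [setparsing], the two definitions collide and configuration precedence picks only one of them. Giving the second app its own stanza names avoids the collision; a sketch (the aux_ prefix is my own choice):

```
# app2/local/props.conf
[Aux]
TRANSFORMS-set = aux_setnull,aux_setparsing

# app2/local/transforms.conf
[aux_setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[aux_setparsing]
REGEX = INFO|ERROR|WARN
DEST_KEY = queue
FORMAT = indexQueue
```

Note also that these parsing-time transforms take effect where parsing happens (a heavy forwarder or indexer), not on a universal forwarder itself.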
Hello members, we have a requirement to come up with a hardware solution for indexing a relatively small amount of data: 20 GB per day. I have seen considerable documentation on the Splunk forum and site, but most information is oriented towards larger amounts of data. I would assume that our disk subsystem can run at less than 800 IOPS. For reference, we would have fewer than 4 users, so we could run a combined instance. Memory would be from 32 GB to maybe 128 GB, and we would have 8 to 12 cores, 64-bit, > 2 GHz (yes, we could virtualize this deployment). I am open to tips and suggestions. Eholz1