
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, I'm using Splunk Cloud. I have Jenkins logs indexed in my system, but for some reason the line breaking is wrong. I took an output example and added it to Splunk with the "Add Data" option, and there it looks OK, but when I'm searching for the sourcetype it is still broken. What is the best way to parse Jenkins logs? This is my sourcetype configuration:

[console_logs]
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Structured
disabled = false
pulldown_type = true

and I want it to be shown in whole build blocks, from

<time> Started by user

to

<time> Finished:
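For reference, a minimal props.conf sketch of the direction I have in mind, assuming each build block begins with a line containing "Started by user" (the regex and the MAX_EVENTS value are assumptions to adjust, and index-time settings only affect newly indexed data):

[console_logs]
CHARSET = UTF-8
NO_BINARY_CHECK = true
# merge lines into one event per build, starting a new event only when a new build begins
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = Started\s+by\s+user
# allow long console outputs to stay inside a single event
MAX_EVENTS = 10000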
Hello, I'm currently exploring the integration of Splunk with SAP Analytics Cloud for our data analysis and visualization needs. While I've found some documentation on the topic, I'm looking for practical advice and insights from those who have successfully implemented this integration. Specifically, I'd like to know:

- What are the key considerations when setting up the integration between Splunk and SAP Analytics Cloud?
- Are there any best practices or recommendations for optimizing data transfer and visualization between the two platforms?
- How can I ensure that real-time data from Splunk is effectively utilized in SAP Analytics Cloud for timely decision-making?
- Are there any common challenges or pitfalls I should be aware of during this integration process, and how can I mitigate them?

I have checked https://community.splunk.com/t5/Community/ct-p/en-us/SAP Analytics Cloud Course for guidance. If you have experience with this integration or can point me to valuable resources, I would greatly appreciate your insights. Thank you!
Dear All, I have a dashboard with a Choropleth map presenting established connections from various countries.

| index=*** sourcetype=*** bla bla
| bla bla bla
| iplocation IP
| table Time Username IP Country
| stats count by Country
| eval count=case(count < 10, "1:Less than 10", (count > 10 AND count <= 20), "2:Between 10 to 20", (count > 20 AND count <= 50), "3:Between 21 to 50", (count > 51 AND count <= 100), "4:Between 51 to 100", (count > 100 AND count <= 500), "5:Between 101 to 500", (count > 500), "6:More than 500")
| sort +count
| geom geo_countries featureIdField=Country

In the legend, I see colours with the count of established connections. May I edit the legend so that the name of each country is shown along with the count? I have spent many days googling, but unfortunately I am unable to find the answer. I also tried the Cluster Map visualization, but unfortunately no luck for me. Thank you very much in advance for your advice.

| index=*** sourcetype=*** bla bla
| bla bla bla
| rename ext_device as VPN, ext_when as Time, ext_Username as Username, ext_IP_addr as IP
| iplocation IP
| geostats count by Country
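One workaround I am considering (only a sketch, not a confirmed capability of the Choropleth legend) is to fold the country name and count into the field that drives the colours, so each legend entry carries that text; legend_label below is an invented field name:

| index=*** sourcetype=*** bla bla
| iplocation IP
| stats count by Country
| eval legend_label=Country." (".count.")"
| geom geo_countries featureIdField=Country

The idea is then to colour by legend_label instead of count, which should give one legend entry per country, at the cost of losing the numeric colour ranges.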
Hello, I've encountered an issue in my Splunk environment that's been causing some headaches. When running a search, I receive the following error message: "Search Peer has the following message: 'Error in 'SearchParser': The search specifies a macro 'my_macro' that cannot be found.'" This error seems to be related to a missing macro called 'my_macro', but I'm unsure why this is happening and how to resolve it. I've checked my search query, and it appears to be correct. Can anyone provide some guidance on what might be causing this error and how I can go about resolving it? Any help or insights would be greatly appreciated! Thank you.
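In case it matters: if the macro genuinely does not exist on the search peers or is not shared to the app and user running the search, a minimal macros.conf definition would look like the sketch below; the index and sourcetype are placeholders, not my real search:

[my_macro]
definition = index=main sourcetype=access_combined
iseval = 0

Permissions can also be checked under Settings > Advanced search > Search macros, where the macro must be shared at least to the app the search runs in.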
Hi Team, I am trying to schedule an alert based on thresholds for two time windows:

- trigger if the count falls to 0 events between 23:00 and 07:00
- trigger if the count falls to fewer than 20 events between 07:00 and 23:00

Is it possible to define two thresholds like the above in one alert?

index=ABC sourcetype=XYZ login | stats count | where count=0 (between 23:00 and 07:00)
index=ABC sourcetype=XYZ login | stats count | where count<=20 (between 07:00 and 23:00)

Please advise. Thank you.
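A sketch of what I have in mind for a single alert, assuming it runs hourly and the current hour decides which threshold applies (the hour boundaries and the <=20 comparison follow the description above and would need tuning):

index=ABC sourcetype=XYZ login
| stats count
| eval hour=tonumber(strftime(now(), "%H"))
| where ((hour>=23 OR hour<7) AND count=0) OR ((hour>=7 AND hour<23) AND count<=20)

With a trigger condition of "number of results > 0", the alert fires only when the threshold for the current window is breached.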
In outputs.conf you can configure compressed = <boolean> to compress the data, but the documentation doesn't specify how the compression is done. There is also no parameter specifying the compression method. So my question is: what compression is used by default, and is there any documentation that describes it?
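For reference, the documentation only exposes the boolean, not a choice of algorithm, and the setting has to match on both sides; a minimal sketch (server address and port are placeholders):

# outputs.conf on the forwarder
[tcpout:indexers]
server = 10.0.0.1:9997
compressed = true

# inputs.conf on the receiving indexer
[splunktcp://9997]
compressed = true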
We have a Splunk Enterprise installation where everything is on the same server/install (search head etc.). At the moment we have a script that shuts down the Splunk services, then zips the whole /opt/splunk/ folder and copies it to a NAS. The problem is that this takes about 1.5 h, and during that time we can't reach Splunk (since the service is shut down). Would it be possible to do this "on the fly" instead of shutting down the service, and just zip the entire folder while it is "alive"? My thinking is that this won't be optimal, since bucket files will be "open" etc. But what is your take on this? Maybe there is another, better solution?
[search] | stats count by ClientName Outcome

Example:

Client1 Positive count
Client1 Negative count
Client2 Positive count
Client2 Negative count
Client2 Unknown count

How do I get the percentage for each client's outcomes?

Client1 Positive count %
Client1 Negative count %
Client2 Positive count %
Client2 Negative count %
Client2 Unknown count %
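A sketch of one way to do this with eventstats, assuming the percentage should be of each client's own total (field names follow the example above):

[search]
| stats count by ClientName Outcome
| eventstats sum(count) as client_total by ClientName
| eval percent=round(100*count/client_total, 2)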
Hi all,

So here is the deal: I have to prepare some (a lot of) db_outputs (using DB Connect), but the corresponding tables do not exist yet. The colleagues responsible for that are on different tasks. I would like to configure the exports in advance, so that once the tables are ready, the output will just flow (and I might not be able to work on it later). I did not manage to find a way to do this in the GUI, as it requires every step of the way to be fulfilled, so even if I had data waiting for me, I would not be able to prepare the field matching. So my idea is to configure them in db_outputs.conf, and then a restart of the HF should (or at least I think) be the solution. However, there is this:

customized_mappings = <string>
# required
# Specifies the output data name (fieldx) and database column number (1...n) mappings.
# The expected format is:
# field1:column1:type1,field2:column2:type2,…,fieldN:columnN:typeN

And I do not know where to get the values for the types (I already know which field will be varchar, timestamp etc.; what I do not know is the numeric representation of the field types). So it is a two-fold question:

1) Does anybody know this numeric-to-field-type mapping (for example varchar=12, unsigned integer=4 — these I got from previous tables)?
2) Has anyone configured outputs in advance, before the corresponding table is even created, and does it start later automatically?

Have fun!
rd
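On question 1, my working assumption is that the numeric codes line up with the java.sql.Types constants, which matches the varchar=12 and integer=4 values seen in existing tables. A hedged sketch of a customized_mappings line under that assumption (the stanza name, field names and column positions are invented for illustration):

# db_outputs.conf, illustrative stanza
[my_future_output]
# 12 = VARCHAR, 93 = TIMESTAMP, 4 = INTEGER in java.sql.Types (assumed mapping)
customized_mappings = host:1:12,event_time:2:93,bytes:3:4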
Hi, I am importing a CSV file into Splunk Enterprise that has a semicolon as the field separator, but Splunk does not parse it correctly. For instance this field --> SARL "LE RELAIS DU GEVAUDAN";;;"1 is treated as a whole and is not getting split. Do you know which settings I should configure in the file importer wizard in order to import it? Thank you. Kind regards, Marta
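If the wizard keeps treating each line as a single field, a props.conf sketch of the delimiter settings I would expect to end up with (the sourcetype name is a placeholder, and the quote handling is an assumption given the embedded double quotes):

[my_semicolon_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ;
HEADER_FIELD_DELIMITER = ;
FIELD_QUOTE = "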
Hi Team, I am trying to monitor a .NET Windows service application and I have followed the instructions in the link below: https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/net-agent/install-the-net-agent-for-windows/configure-the-net-agent-for-windows-services-and-standalone-applications I am not a developer and don't have the source code (namespace/class/functions) of my Windows service, so I couldn't add custom POCO entry points. How do I discover my .NET functions from the Windows service? Can anyone help me? Regards, Durai
I have a CSV file which has some columns. There is one column named GROUP and the data in that column are in the format {'a':1,'b':2}; there can be any number of strings. There is another column VALUE and the data are in the format {'a','b'}. I want to check whether the strings in the VALUE column are present in the GROUP column and create a separate column named DATA containing the strings that are not present. I am not sure how to achieve this in Splunk using commands. Does anyone have any suggestions?

Example:

Group                     Value       Data
{'a':'1','b':'2'}         {'a','b'}
{'a':1,'b':'2'}           {'a'}       {'b'}
{'a':1,'b':'2','c':'3'}   {'a'}       {'b','c'}

There are many columns like these, and there can be more strings in the GROUP column.
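A sketch of one possible direction, assuming the keys are always single-quoted exactly as in the example and that mvmap/mvfind are available (Splunk 8.0+); group_keys and value_keys are invented working fields:

| rex field=Group max_match=0 "'(?<group_keys>[^']+)'\s*:"
| rex field=Value max_match=0 "'(?<value_keys>[^']+)'"
| eval Data=mvmap(group_keys, if(isnull(mvfind(value_keys, "^".group_keys."$")), group_keys, null()))
| eval Data="{'".mvjoin(Data, "','")."'}"

The first two rex calls pull the key names out of GROUP and VALUE as multivalue fields, and the mvmap keeps only the GROUP keys that have no match in VALUE; the last eval just re-wraps the result in the {'x','y'} notation.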
I got the errors "failed to: delete_local_spark_dirs on" and "failed to: force_kill_spark_jvms on" when I run /opt/caspida/bin/Caspida start-all. Any idea how I can resolve this? I was not able to access the web UI, and I was running the command (on the UBA manager) /opt/caspida/bin/Caspida stop-all. There was an error, and when I tried to run start-all, it showed the same error.
We have set up an alert which should trigger every hour. When we run the alert query it shows results, but we did not receive the mail. There is no difference between index time and event time. In the scheduler logs the status shows as success, but I don't see any python logs and the alert did not get fired. What could be the reason for not receiving the mail from the alert?
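For what it's worth, two hedged searches that usually help narrow this down (the saved search name is a placeholder): the scheduler's view of the alert, and any sendemail errors written to python.log:

index=_internal sourcetype=scheduler savedsearch_name="My alert" | stats count by status, result_count

index=_internal source=*python.log* sendemail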
Dear Support, I have two indexes (indexA, indexB) and one receiving server with two different ports (10.10.10.10:xx, 10.10.10.10:yy). I need my indexer to forward indexA to 10.10.10.10:xx and indexB to 10.10.10.10:yy. What is the best way to achieve this? I made two different apps with outputs, props and transforms and it does not work. I tried one app with LB and it does not work either. Example of outputs.conf:

[tcpout]
defaultGroup = group1, group2

[tcpout:group1]
server = 10.10.10.10:xx
forwardedindex. = ???

[tcpout:group2]
server = 10.10.10.10:yy
forwardedindex. = ???

Is this a good way to do it? How should the forwardedindex config look? What about props and transforms? I would appreciate any help. Thanks, Pawel
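The pattern I have seen described for routing by index uses _TCP_ROUTING set by a transform keyed on the index name, rather than forwardedindex filters; a sketch under that assumption (group names are placeholders, and this kind of routing only works where parsing happens, i.e. on a heavy forwarder or indexer):

# outputs.conf
[tcpout]
defaultGroup = group1

[tcpout:group1]
server = 10.10.10.10:xx

[tcpout:group2]
server = 10.10.10.10:yy

# props.conf
[default]
TRANSFORMS-index_routing = route_indexA, route_indexB

# transforms.conf
[route_indexA]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexA$
DEST_KEY = _TCP_ROUTING
FORMAT = group1

[route_indexB]
SOURCE_KEY = _MetaData:Index
REGEX = ^indexB$
DEST_KEY = _TCP_ROUTING
FORMAT = group2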
Hello, I want to find in a subsearch the autonomous_system for the IP address I provide (in this example 1.1.1.1). Next, based on the name of the autonomous_system returned from the subsearch, I want to find all IP addresses connecting to my network that belong to that autonomous_system. For now I have something like this:

index=firewall src_ip=*
| lookup asn ip as src_ip
    [ search index=firewall src_ip=1.1.1.1
      | fields src_ip
      | lookup asn ip as src_ip
      | rename autonomous_system AS subsearch_autonomous_system
      | dedup subsearch_autonomous_system ]
| stats values(src_ip) by subsearch_autonomous_system

But when I run this search I get the error: Error in 'lookup' command: Cannot find the source field '(' in the lookup table 'asn'. Can anyone help me with that? Regards, Daniel
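A sketch of one way to restructure it, assuming the asn lookup returns a field called autonomous_system: do the lookup in the outer search and let the subsearch return only the autonomous_system value to filter on, instead of embedding the subsearch inside the lookup command:

index=firewall src_ip=*
| lookup asn ip as src_ip OUTPUT autonomous_system
| search
    [ search index=firewall src_ip=1.1.1.1
      | lookup asn ip as src_ip OUTPUT autonomous_system
      | dedup autonomous_system
      | fields autonomous_system ]
| stats values(src_ip) by autonomous_system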
I have a use case to use the ML feature to detect anomalies in comms sent from each ID. I was trying to get this from the predict function, but there are multiple IDs and I can't set an alert/report individually for every ID. How can I do this? Please help. The query I am trying:

index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz,qtr,jyk,klo,mno,ghr)
| timechart span=1d count as commSent by ID
| predict commSent as predicted_commSent algorithm=LLP holdback=0 future_timespan=24
| eval anamoly_score=if(isnull(predicted_commSent),0,abs(commSent - predicted_commSent))
| table _time,ID,commSent,predicted_commSent,anamoly_score

The above query is not giving any output; it seems the predict command does not work with multiple columns. Please suggest.
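As an alternative sketch (not the predict command itself), a per-ID baseline with streamstats can flag days that deviate from each ID's own recent average; the 30-day window and the threshold of 3 standard deviations are assumptions to tune:

index=indexhc source=hcdriver sourcetype="assembly" appname="marketing" ID IN (abc,xyz,qtr,jyk,klo,mno,ghr)
| timechart span=1d count as commSent by ID
| untable _time ID commSent
| streamstats window=30 avg(commSent) as avg_sent stdev(commSent) as stdev_sent by ID
| eval anomaly_score=if(stdev_sent>0, abs(commSent-avg_sent)/stdev_sent, 0)
| where anomaly_score>3
| table _time ID commSent avg_sent anomaly_score

The untable step turns the per-ID timechart columns back into rows, so the baseline and score are computed separately for each ID, which avoids the multiple-column limitation.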
Hi. I've tried to get Splunk to understand syslog messages coming from a Cisco Mobility Express setup. Mobility Express (ME) is the controller solution built into, in this setup, 3 AP3802I access points running 8.10.171.0. I have been successful at getting and displaying data from a C2960L-8PS switch running IOS 15, but not from any access point (AP). I've set up syslogging from the ME directly to a single-instance Splunk demo lab running on Ubuntu with rsyslog. I can see data being logged into /data/syslog/192.168.40.20/:

-rw-r--r-- 1 syslog syslog  9690 Sep 4 15:54 20230904-15.log
-rw-r--r-- 1 syslog syslog 41100 Sep 4 16:58 20230904-16.log
-rw-r--r-- 1 syslog syslog  9192 Sep 4 17:53 20230904-17.log

Examples of syslog messages are:

2023-08-29T05:48:04.090627+00:00 <133>SampleSite: *emWeb: Aug 29 07:48:03.431: %AAA-5-AAA_AUTH_ADMIN_USER: aaa.c:3334 Authentication succeeded for admin user 'example' on 100.40.168.192
2023-09-04T17:01:52.684140+02:00 <44>SampleSite: *apfMsConnTask_0: Sep 04 17:01:52.495: %APF-4-PROC_ACTION_FAILED: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.
2023-09-04T17:01:52.718781+02:00 <44>SampleSite: *Dot1x_NW_MsgTask_0: Sep 04 17:01:52.530: %LOG-4-Q_IND: apf_80211k.c:825 Could not process 802.11 Action. Received RM 11K Action frame through incorrect AP from mobile station. Mobile:1A:4A:FA:F9:BA:C6.

I've installed TA-cisco_ios from Splunkbase. At the top of my etc/apps/search/local/inputs.conf I've added:

[monitor:///data/syslog/udp/192.168.40.20]
disabled = false
host = ciscome.example.net
sourcetype = cisco:wlc
#sourcetype = cisco:ap
index = default

For switches cisco:ios works fine, but I cannot get cisco:wlc or cisco:ap to process data, it seems. Has anyone used Cisco Mobility Express with Splunk and gotten anything useful out of the logs? Am I doing it right? Thanks for any tips.
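For comparison, a minimal inputs.conf sketch pointed at the directory the files actually land in according to the listing above, assuming the logs really are under /data/syslog/192.168.40.20/ rather than /data/syslog/udp/192.168.40.20/ (the sourcetype choice remains the open question):

[monitor:///data/syslog/192.168.40.20]
disabled = false
host = ciscome.example.net
sourcetype = cisco:wlc
index = default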
Hi, I want to create a table in the below format and provide the counts. I have multiple fields in my index and I want to create a table (similar to an Excel pivot) using three fields: App Name, Response code and Method.

index=abcd | chart count over App Name by Response code

The above works for me, but I can only create a table using 2 fields. How can I create a table in the format below with 3 or more fields? Please could you help.

APP NAME   RESPONSE CODE 200        RESPONSE CODE 400        RESPONSE CODE 400
           GET  POST  PATCH         GET  POST  PATCH         GET  POST  PATCH
APP1
APP2
APP3
APP4
APP5
APP6
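A sketch of the usual workaround: combine two of the fields into one with eval, then chart over the third. The code_method field name is invented, and field names containing spaces need quoting as shown (adjust to the actual field names):

index=abcd
| eval code_method='Response code'."_".Method
| chart count over "App Name" by code_method

This produces one column per Response code/Method combination (for example 200_GET, 200_POST), which is the flattened equivalent of the two-level header in the pivot above.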
Hi, how can I determine the index responsible for the majority of Splunk license consumption when analyzing security data in ES?
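A hedged sketch using the license usage log, assuming the search can reach the license manager's _internal index (note that idx can be blank for squashed entries when there are very many source/sourcetype combinations):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by idx
| eval GB=round(bytes/1024/1024/1024, 2)
| sort - GB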