All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Let's say I have 3 lookups >>> a-list.csv, b-list.csv, c-list.csv, and the lists have only 1 column header = Name. Alice is on a-list, Bob is on b-list, Charles is on c-list. There are lots of people on each list, and the lists are dynamic and updated. I have a request to create a combined master lookup (where C_M-list.csv = a-list.csv + b-list.csv + c-list.csv), where the list contains NAME, FLAG fields, such as:

NAME,FLAG
Alice,a-list
Bob,b-list
Charles,c-list

So far I use the following query to build C_M-list.csv, where there is a Name and a FLAG appended to each name (which indicates which list the person is from), BUT I am wondering if there is a better way...

| inputlookup a-list.csv
| eval FLAG = "a-list"
| inputlookup b-list.csv append=true
| eval FLAG = coalesce(FLAG, "b-list")
| inputlookup c-list.csv append=true
| eval FLAG = coalesce(FLAG, "c-list")
| .... <rest of the query follows> ....

My desired outcome is a C_M-list.csv like:

Alice,a-list
Bob,b-list
Charles,c-list

Any suggestions or improvements appreciated. TY!
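One possible simplification (a sketch, untested, assuming each list really has just the one Name column) is to tag each list inside its own append subsearch, so no coalesce bookkeeping is needed:

```
| inputlookup a-list.csv
| eval FLAG = "a-list"
| append [| inputlookup b-list.csv | eval FLAG = "b-list"]
| append [| inputlookup c-list.csv | eval FLAG = "c-list"]
| rename Name AS NAME
| table NAME FLAG
| outputlookup C_M-list.csv
```

Because each FLAG is assigned before the subsearch results are appended, rows can never inherit the wrong flag even if the lists overlap.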
How do I create an appKey for EUM after an application has already been created with the same name?
Hello Community, a 2-part question: first, how do I use an IF / ELSE statement; second, how do I specify the JSON elements in the query? Any examples would be helpful. How would I do a search query that, depending on the log source, pulls different fields? For example:

index=myIndex
| IF (source=Source1 OR sourcetype=sourceTypeB) pull JSON element1, element2, etc
| ELSE IF logSource=logSource2 pull fieldsname1, fieldname2, etc
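SPL has no IF / ELSE control flow between commands, but per-event conditionals can be sketched with eval's if() together with the spath() eval function for JSON. A rough, untested sketch (the field and element names below are taken straight from the pseudocode above and are hypothetical):

```
index=myIndex
| eval element1 = if(source="Source1" OR sourcetype="sourceTypeB", spath(_raw, "element1"), null())
| eval element2 = if(source="Source1" OR sourcetype="sourceTypeB", spath(_raw, "element2"), null())
| eval other1  = if(source="logSource2", fieldsname1, null())
```

Each event then carries only the fields relevant to its source, and later commands (stats, table, etc.) can treat them uniformly.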
Hi, I have the following problem and so far I couldn't find a solution on my own: I built my own custom visualization based on Plotly.js, here: https://splunkbase.splunk.com/app/5387/ The app works as intended, but I cannot use it together with other dashboard visualizations that contain datetime fields, because for some reason the datetime values are shown as "undefined undefined". For example, if I place a line chart on a dashboard together with my custom visualization, the line chart will render like this: [screenshot]. Somehow my custom visualization seems to overwrite something that causes the line chart to not show the datetime values. The overall shape of the chart is fine, so the datetime values are there; they are just not displayed correctly. It doesn't have to be a line chart; even in the search app the _time field is shown like this: [screenshot]. I went through my custom visualization's code but couldn't find anything related to this problem. Maybe Plotly.js is the problem? Am I missing something in the Splunk documentation?
Anyone using the Gemini KV Store Tools app (https://splunkbase.splunk.com/app/3536/)? If so, have you gotten the backup files retention to work?  This app was recommended to me by Splunk Support. According to the documentation, "Retention deletion takes place when running kvstorebackup."  But regardless of what value I use in the 'Age-Based Retention (Days)' field in the setup UI page, the backup files are never cleaned up.  I'd rather not use a cronjob to clean them up as our Splunk instance is IaC and instances/servers can be terminated/rebuilt at any time (meaning the cron tab would be lost). Thanks!
Which version of the Splunk Universal Forwarder do I need to install for AIX 5.1 and AIX 6.1 OS machines? We have Splunk 7.3.3.
Given a Splunk environment with SQS (S3) as the data source, is it possible to "filter" messages so that we can separate each file (based on its prefix) into different Splunk indexes? Put another way: if we have 25 indexes, corresponding to 25 different data types in an S3 bucket, and we want to use S3 and SQS, can we configure Splunk to conditionally index the data based on a path/prefix match pattern applied to each SQS message?
Hi all, I have to capture streams coming from some tap devices in my network using the Stream App. What are the minimal hardware requirements for the forwarder machine in both situations: using a Universal Forwarder, and using an Independent Forwarder? How many taps can be connected to a single forwarder, and does it depend on the NICs? Can the forwarder be installed on a virtual machine, or must it be installed on a physical machine? Ciao and thanks. Giuseppe
What is the minimum hardware requirement for installing heavy forwarder with DBconnect app which is sending data to Splunk Cloud ?
Hello there, I have this inputs.conf:

[monitor:///opt/splunk/etc/apps/my_app/bin/out/.../*.gz]
disabled = 0
index = security_my_index
sourcetype = fzzz
source = fdr
interval = 60

This is only indexing the files under /opt/splunk/etc/apps/my_app/bin/out/data/**, but data is not getting indexed from the locations below:

/opt/splunk/etc/apps/my_app/bin/out/fdrv2/aidmaster
/opt/splunk/etc/apps/my_app/bin/out/fdrv2/managedassets

Any idea on this?
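If the `...`-plus-`*.gz` wildcard combination in the stanza path is behaving unexpectedly, one alternative sketch (untested, and reusing the stanza values from the question) is to monitor the parent directory and restrict matches with a whitelist regex, which is a standard inputs.conf setting matched against the full file path:

```
[monitor:///opt/splunk/etc/apps/my_app/bin/out]
disabled = 0
index = security_my_index
sourcetype = fzzz
source = fdr
whitelist = \.gz$
```

This recurses through every subdirectory of out/ (including fdrv2/aidmaster and fdrv2/managedassets) and picks up only files ending in .gz.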
Hi Splunkers, we are collecting logs from multiple devices/applications and sending them to one single S3 bucket, where they are separated into different folders. We are trying to ingest the data into Splunk using the AWS Add-on via the SQS-based S3 input type, which only takes a queue name or queue URL. Since we have only one bucket (with many folders holding data from different sources that need to go to different indexes), the challenge we face is creating separate inputs for each folder in the bucket and sending each to a different index, since SQS can be subscribed to only at bucket level and not folder level (to my knowledge). Is there any config setting available in the Splunk AWS Add-on, like a regex or key filter, to read only a specific folder or skip folders, so that I can map a folder to an index? Thanks in advance.
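One commonly used pattern on the AWS side (a sketch, not verified against this setup; the queue ARN and prefix below are hypothetical) is to create one SQS queue per folder and use S3 event notification filter rules, which do support key prefixes, so each SQS-based S3 input sees only the objects from its own folder:

```
{
  "QueueConfigurations": [
    {
      "QueueArn": "arn:aws:sqs:us-east-1:111122223333:firewall-logs-queue",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "firewall/" }
          ]
        }
      }
    }
  ]
}
```

Repeat one QueueConfiguration per folder prefix, then create a separate SQS-based S3 input in the add-on for each queue, each pointing at its own index.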
Hi all, I have used the SVG Custom Visualization to create a nice picture. Now I would like to have a drilldown, or at least a possibility to set a token, when a part of the picture is clicked on. Is there a way to achieve this? Any hint would be great. Best regards, Tomasz
Has anyone installed and configured Microsoft O365 Email Add-on for Splunk? I have a few concerns such as using a transport rule to bcc every single message sent through our tenant to a single account.  During the day that is about 100k messages an hour. It's a lot all going to one account. We will almost certainly brush up against the 1.4 million daily limit mentioned in the app's description. Just curious to see how this add-on has worked for others and any issues they've had/seen. Thx
Hi, I have a log like this: [Information] WebService Call CheckVehicle : country=111111, licensePlate=12DUMMY I would like to extract only licensePlate, maybe using rex. Thank you.
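A minimal rex sketch based on the sample line above (assuming the plate value is always alphanumeric and ends at a delimiter or end of line):

```
... | rex field=_raw "licensePlate=(?<licensePlate>\w+)"
    | table licensePlate
```

Against the sample event this would yield licensePlate=12DUMMY; if plates can contain dashes or spaces, the character class would need widening, e.g. `[^,]+`.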
I am rebuilding a SH cluster from scratch. I've followed the documentation carefully to this point. I have the shcluster captain bootstrapped, and splunk show shcluster-status shows the captain as the only member, but the bootstrapping process failed to add my member nodes due to comms errors. Pretty sure I've got those fixed now.

When I do

splunk add shcluster-member -current_member_uri https://ip-address-of-captain:8089

on a member node, it tells me:

current_member_uri is pointing back to this same node. It should point to a node that is already a member of a cluster.

Obviously, I have checked and re-checked the URI, which I believe is correct (https://ip-address-of-captain:8089), and it is set correctly in server.conf on both sides. There is no IP conflict, and the servers have no issue communicating.

If I run

splunk add shcluster-member -new_member_uri https://ip-address-of-member:8089

from the captain, it tells me:

Failed to proxy call to member https://ip-address-of-member:8089

Google tells me this can be an issue with the pass4SymmKey, and to that end I have updated the pass4SymmKey on both sides and restarted the instances a few times, to no avail. I'm stumped. Where did I go wrong that I can't get these search heads to cluster up nicely?
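For comparison, a sketch of the usual join sequence, run on the node being added (the hosts, replication port, and label here are hypothetical placeholders; -current_member_uri must point at a node that is already in the cluster, which right after bootstrap means the captain):

```
splunk init shcluster-config -mgmt_uri https://ip-address-of-this-member:8089 -replication_port 9777 -secret <pass4SymmKey> -shcluster_label shcluster1
splunk restart
splunk add shcluster-member -current_member_uri https://ip-address-of-captain:8089
```

Given the "pointing back to this same node" error, it might also be worth checking that the member's own mgmt_uri in its server.conf [shclustering] stanza was not accidentally set to the captain's address, since that would make the two URIs compare equal.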
Hello All, I have loaded the wrong Splunk dashboard, and have seen many others do the same. Since dashboards at times contain a bunch of queries, (a) it impacts performance, and (b) when we then try loading the right dashboard, it sits in a wait state until the old, wrong dashboard finishes loading. So is there any good way to stop loading an entire dashboard once it has been opened? Thanks, Pathik
Hi, we have integrated Azure Event Hub with Splunk and we are trying to understand the content of the events we are getting. What is the event ID? How are the events separated? We are getting 1 event with a few correlationId values. The sourcetype is the add-on's default, mscs:azure:eventhub.
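If the payload is the usual Event Hub JSON with a top-level records array (common for Azure diagnostic logs, but an assumption here, not confirmed by the question), one search-time sketch for splitting a single ingested event into one result per record is:

```
sourcetype="mscs:azure:eventhub"
| spath path=records{} output=record
| mvexpand record
| spath input=record
```

After mvexpand, each result holds one element of the array, and the final spath extracts its inner fields (correlationId, operationName, etc.) individually.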
None of my configurations in props.conf is working and the entire file is coming in as a single event. HEADER_FIELD_LINE_NUMBER = 1 is also not working. The data is server patching data, CHG0030338_Linux_Executive_Summary.log:

Change_Request_Number$$@@$$Planned_Start_Date$$@@$$Planned_End_Date$$@@$$CR_Type$$@@$$Type$$@@$$Application$$@@$$Total_Servers$$@@$$Servers_Successfully_Patching$$@@$$Servers_Failed_Patching$$@@$$Servers_Skipped_Patching
CHG0030338$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$Linux$$@@$$Total$$@@$$1$$@@$$1$$@@$$1$$@@$$0$$@@$$0
CHG0030338$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$Linux$$@@$$Rhel$$@@$$0$$@@$$0$$@@$$0$$@@$$0$$@@$$0
CHG0030338$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$Linux$$@@$$CentOS$$@@$$0$$@@$$0$$@@$$0$$@@$$0$$@@$$0
CHG0030338$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$9/9/2020 5:45:00 PM (GMT)$$@@$$Linux$$@@$$Suse$$@@$$1$$@@$$1$$@@$$1$$@@$$0$$@@$$0

inputs.conf:

[monitor:///opt/splunkforwarder/patch/*Summary*.log]
disabled = 0
index = patch_dummy
sourcetype = patch_summary
crcSalt = <SOURCE>

props.conf:

[patch_summary]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
HEADER_FIELD_LINE_NUMBER = 1
TRANSFORMS-sourcetype = patch_sourcetype
TRANSFORMS-route = patch-monitoring-route

transforms.conf:

[patch-monitoring-route]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = acn-dev1-route-group

[patch_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::patch
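Until the index-time configuration is sorted out, a search-time sketch (untested) for pulling fields out of the $$@@$$-delimited rows, with the field order taken from the header line in the sample data:

```
index=patch_dummy sourcetype=patch_summary
| eval f = split(_raw, "$$@@$$")
| eval Change_Request_Number = mvindex(f, 0),
       Planned_Start_Date   = mvindex(f, 1),
       Planned_End_Date     = mvindex(f, 2),
       CR_Type              = mvindex(f, 3),
       Type                 = mvindex(f, 4),
       Total_Servers        = mvindex(f, 6)
```

split() turns each raw line into a multivalue field, and mvindex() picks the columns out by position; the remaining columns follow the same pattern.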
1. Can DB Connect on a Heavy Forwarder be configured to pull data from tables in a DB2 database and push it to Splunk Cloud? 2. Does the Heavy Forwarder with the DB Connect app need to be installed on the same server where the DB2 database is installed to pull data, or can the HF be installed on a different server? In this case, the DB2 database is on a Windows server.
Hello Splunk Team. Kindly asking for your assistance and a recommendation for EC2 instances. We are working with Splunk services and forwarding data from various AWS accounts to the on-prem datacenter. Now we have a task to scale the EC2 instances because of the enormous increase in the data we will be sending. We are using an Auto Scaling group and three EC2 instances (c5.4xlarge) with the Splunk Heavy Forwarder installed and configured. We are not using any indexers and not storing the data, just forwarding.

Currently we are not forwarding much data, ~100 MB per day, but it will increase up to 70 GB per day, and the question is what the proper way of scaling the AWS EC2 instances is. As mentioned, we are using an Auto Scaling group and we can configure it to scale out instances based on memory usage, since Splunk requires a lot of RAM, but at the same time we are not quite sure about the timing of scaling and the data flow. Data might be sent based on triggers in another AWS account and we cannot predict that, so it might be a good idea to just scale the instances based on instance performance and network flow. Currently each instance uses around 25-30% of its 16 GB of RAM without any spikes. I calculated an approximate prediction of how much RAM will be required for this upgrade for each instance and noted these instance types:

r4.4xlarge: 16 / 58 / 122 GiB
r4.8xlarge: 32 / 97 / 244 GiB
r5.4xlarge: 16 / 70 / 128 GiB
r5.8xlarge: 32 / 128 / 256 GiB

So, what do you think: would the r4/r5 instance types be able to handle such an increase in forwarded data, or do we need to find some other solution? Maybe you can make recommendations based on similar cases. The main question is how much RAM the Heavy Forwarders will consume based on this information. Thanks!