All Topics
Hello everyone, I have an environment consisting of three VPCs (say X, Y, and Z). Each VPC holds Linux, Windows, and AWS logs. I have successfully set up the AWS log ingest using separate indexes (aws_vpcx, aws_vpcy, aws_vpcz). However, I'm struggling to get the Linux/Windows data to index the same way. The unique identifier I'm using is the hostname. The following holds true for all hostnames per VPC:

VPC X has hostnames == vpcX***
VPC Y has hostnames == vpcY***
VPC Z has hostnames == vpcZ***

For Linux logs I tried to add the following. inputs.conf currently has index=os_vpcX, so the default is for all Linux hosts in VPC X, which is why it's not in the props and transforms files below. Currently all VPCs are sending to the os_vpcX index instead of all three, and I need to figure out why the config below isn't working. I'm doing this from the cluster master and pushing it to the indexer cluster.

props.conf:

[host::vpcY*]
TRANSFORMS-osVpcY = osVpcYTrans

[host::vpcZ*]
TRANSFORMS-osVpcZ = osVpcZTrans

transforms.conf:

[osVpcYTrans]
REGEX = vpcX.+
DEST_KEY = _MetaData:Index
FORMAT = os_vpcy

[osVpcZTrans]
REGEX = vpcY.+
DEST_KEY = _MetaData:Index
FORMAT = os_vpcz

My second question is the same but for the Windows add-on. This seems more difficult because the single inputs.conf file has multiple indexes in it. Is there a way for me to specify more than one 'unique' thing about the stanza? For example, the default Windows inputs.conf below contains multiple indexes. I need the Windows data to go to either windows, windows_vpcY, or windows_vpcZ depending on the host that's sending the logs, and I will also need that same separation for the WinEventLog data (wineventlog, wineventlog_vpcY, wineventlog_vpcZ).
###### WinEventLog Inputs for DNS ######
[WinEventLog://DNS Server]
disabled = 0
renderXml = true
index = wineventlog

###### DHCP ######
[monitor://$WINDIR\System32\DHCP]
disabled = 0
whitelist = DhcpSrvLog*
crcSalt = <SOURCE>
sourcetype = DhcpSrvLog
index = windows

Thanks in advance to anyone that can help!
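For reference, one common pattern for host-based index routing (a sketch, not necessarily the fix for the config above) anchors the transform on the host metadata rather than the raw event, since REGEX in transforms.conf matches _raw by default. The stanza and index names below mirror the question; note that MetaData:Host values carry a "host::" prefix:

```conf
# transforms.conf -- sketch: match the host field explicitly
# instead of relying on the default SOURCE_KEY (_raw)
[osVpcYTrans]
SOURCE_KEY = MetaData:Host
REGEX = ^host::vpcY
DEST_KEY = _MetaData:Index
FORMAT = os_vpcy
```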
We are ingesting SCOM events. When an alert is triggered it is assigned an id (the earliest event pictured), and we have created a dashboard of alerts that are in status "new". The issue we have is that some of the alerts have actually been resolved, but the logs that show an alert as resolved carry the id as "monitoringalertid", not "id", so dedup id isn't working. We are having trouble joining these events to get the latest status and remove an alert once it has been resolved. The only value to match these events on is the id/monitoringalertid. Does anyone know a way to match these events? TIA
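A common way to correlate events whose key lives in differently named fields (a sketch; the index name and the status field are assumptions based on the description above) is to coalesce them into one field before grouping:

```spl
index=scom (id=* OR monitoringalertid=*)
| eval alert_id=coalesce(id, monitoringalertid)
| stats latest(status) as latest_status by alert_id
| where latest_status="new"
```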
I am trying to get the free space in % for the C, D and E drives. I have the below events in Splunk:

02/25/2021 08:22:32.272 -0600 collection=LogicalDisk object=LogicalDisk counter="% Free Space" instance=E: Value=4284.377358490566
02/25/2021 08:20:32.264 -0600 collection=LogicalDisk object=LogicalDisk counter="% Free Space" instance=D: Value=98.32841691248771
02/25/2021 08:26:32.298 -0600 collection=LogicalDisk object=LogicalDisk counter="% Free Space" instance=C: Value=43.12314853999153

I am looking for output like:

server name   Drive   Free space available
xyz           C:      20%
xyz                   30%
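One way to get the latest free-space reading per host and drive (a sketch; the search filter simply reuses the field values shown above, and the percentage is rounded for display):

```spl
collection=LogicalDisk counter="% Free Space"
| stats latest(Value) as pct_free by host, instance
| eval pct_free=round(pct_free, 1)."%"
| rename host as "server name", instance as Drive, pct_free as "Free space available"
```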
There is a dashboard in JV Firewall named "Basic Infra Delta Analysis" (location: Basic Infra -> Basic infra delta analysis). After selecting "Addition" or "Removal" from the prompts, the last panel, named "Additional Analysis" or "Removal Analysis" respectively, shows the error below. But when I click the lens symbol to see the query, the query runs and shows the output. The query is taking more than 300 seconds. What could be the possible cause? Please help if anyone else has faced the same issue. Thanks in advance
Hi, I have a field with multiple values, some of which share the same characters at the beginning of the value. I need to find those with the same prefix, and I want to be able to choose how many characters to compare from the beginning of the value.
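A sketch of prefix grouping with a configurable length (the field name myfield and the length 4 are placeholders):

```spl
| eval prefix=substr(myfield, 1, 4)
| stats count values(myfield) as members by prefix
| where count > 1
```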
We currently have Splunk Enterprise with the indexer and search head all on one server. We are planning to migrate to a multi-indexer cluster and would like to know whether the "Splunk Add-on for Salesforce" supports that.
Hi All, I want to always hide my drop-down:

<input type="dropdown" token="TransactionID_filter" searchWhenChanged="true">
  <label>TransactionID</label>
  <fieldForLabel>ESMS_TransactionID</fieldForLabel>
  <fieldForValue>ESMS_TransactionID</fieldForValue>
  <search>
    <query>index="int_gcg_apac_pcf_application_dm_169688" cf_org_name="CM-AP-SIT2" cf_space_name="166190_GCESMS" | table ESMS_TransactionID | dedup ESMS_TransactionID</query>
    <earliest>$timepicker.earliest$</earliest>
    <latest>$timepicker.latest$</latest>
  </search>
  <choice value="*">All</choice>
</input>
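A common Simple XML trick for keeping an input permanently hidden is the depends attribute pointed at a token that is never set anywhere in the dashboard (a sketch; the token name neverShow is arbitrary):

```xml
<!-- depends references an unset token, so this input never renders,
     while its search and default value still populate the token -->
<input type="dropdown" token="TransactionID_filter" searchWhenChanged="true" depends="$neverShow$">
  <label>TransactionID</label>
  <choice value="*">All</choice>
  <default>*</default>
</input>
```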
Hi there. I know I can use multiple inputs/outputs with separate CAs, and even certs, to permit different peers to inject data into the Splunk installation. But I have a different situation.

I have a cluster installation (let's say 4 indexers and 2 search heads) which is configured to use the (RootCA->Intermediate1) chain for cert verification, and the servers just present the "final" cert without the certification chain. I don't know why it was done this way instead of properly configuring just the RootCA for verification and configuring the components to present the full certification chain; I "inherited" this installation, so it was already like that when I got it.

I need to add another indexer to the installation. The problem is that we now have another Intermediate2 CA, and I'm getting new certs from that Intermediate2 CA (which is a subordinate of the same RootCA as Intermediate1). Is there any reasonable way to avoid a full reconfiguration of the CAs? Can I provide Splunk with, for example, a set of two different CAs against which it would try to authenticate a peer? I know I should just reconfigure all members to "properly" use the RootCA, but it's a big operation and requires full system downtime. If I could just reconfigure the system piece by piece, that would be great.
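One detail that may help (an assumption to verify against the docs for your Splunk version): the file referenced by sslRootCAPath is an ordinary PEM bundle, so multiple CA certificates can be concatenated into it, letting peers signed by either intermediate verify:

```conf
# server.conf -- sketch: a CA bundle containing both intermediates plus the root
[sslConfig]
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca-bundle.pem

# built e.g. with:
#   cat intermediate1.pem intermediate2.pem root.pem > ca-bundle.pem
```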
Hello, in Google Analytics there are some types of traffic, like:

search traffic --> who visits the site through a web search
referral traffic --> traffic that comes from someone clicking a link to your site from another site
direct traffic --> traffic where the "referrer is unknown," such as directly typing a URL into the navigation bar
campaigns --> traffic from an AdWords campaign

Is this applicable in AppDynamics EUM? If so, how can it be done?
Hi guys, I'm trying to forward some events to another indexer using my configuration files props.conf, transforms.conf, and outputs.conf. The problem is that when I do it, I forward all my data, not only the index and sourcetype that I want to forward, even though I'm sure I'm applying those filters correctly in my props.conf. What could be happening? Thanks in advance.
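For reference, a typical selective-routing setup looks like the sketch below (the stanza names, sourcetype, and target group are placeholders). When only a subset should be routed, the defaultGroup in outputs.conf matters: if it includes the extra target group, everything goes there regardless of the transform:

```conf
# props.conf
[my_sourcetype]
TRANSFORMS-route = route_to_other

# transforms.conf
[route_to_other]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = other_indexer

# outputs.conf -- other_indexer must NOT be in defaultGroup
[tcpout:other_indexer]
server = otherindexer.example.com:9997
```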
Hi, I have a panel in a Splunk dashboard which is a table with two columns. The first column is a comment and the second is a date. I would like to make the first column editable in the UI by the user, and save/output the edits to a lookup. Any leads would be helpful. TIA!
I have the Splunk DB Connect app installed on one of my servers with Splunk version 8. My search app is installed on another server with Splunk Enterprise version 7, which serves as the Splunk UI. Does the DB Connect app on version 8 communicate with a search head on version 7? None of my job inputs in the DB Connect app are sending logs to my version 7 search head. Could anyone please help with this issue?
Hi everyone,

On my Linux machine, which has the Splunk Forwarder and the Splunk Add-on for Unix and Linux installed, I'm using this command to find the largest files on my server:

sudo du -a /var/log | sort -n -r | head -n 20

It lists the 20 largest files in the /var/log directory.

Now, I would like to do the same using Splunk. Is there a way to edit the inputs.conf file to index this data into Splunk, or is there any type of search I can use to achieve this?

Thanks in advance to anyone willing to help.

Regards,
Hisham
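One approach (a sketch; the app, script name, interval, and index are placeholders) is a scripted input that runs the same shell pipeline on a schedule and indexes its output:

```conf
# $SPLUNK_HOME/etc/apps/myapp/local/inputs.conf
[script://./bin/largest_files.sh]
interval = 3600
sourcetype = du_largest_files
index = os

# myapp/bin/largest_files.sh (must be executable):
#   #!/bin/sh
#   du -a /var/log 2>/dev/null | sort -n -r | head -n 20
```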
For installations of Splunk (UF or Enterprise) newer than version 7.0, there is no default password "changeme"; the installer asks for a password during installation. How can I modify the below line in my script?

/opt/splunkforwarder/bin/splunk edit user admin -password $PASSWORD -auth admin:changeme

How can the password be set during installation on Splunk version 7.3.3? I tried using the -auth parameter but had no luck.
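For context, on 7.1 and later one widely used mechanism is seeding the admin credentials before the first start via user-seed.conf (a sketch; verify the details against the docs for your exact version):

```conf
# $SPLUNK_HOME/etc/system/local/user-seed.conf
# Read on first startup (then removed); sets the initial admin credentials,
# so no "changeme" login is ever needed.
[user_info]
USERNAME = admin
PASSWORD = <your-password-here>
```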
Hi, we have configured an ECS cluster using EC2. We are developing a Java application and planning to deploy it as an ECS task. I would like to get the container metrics and application tracing, as well as EC2 node metrics, using AppDynamics. Could you please suggest the correct approach along with best practices?

1. Do you have a sidecar Docker image for the Java agent?
2. What special configuration is required to configure the sidecar?
3. If you have a detailed document about this, please share it with us.
Hi, my events contain a field named "fruit" that distinguishes what kind of fruit the event is about. I would like to count how many different fruits have appeared in the events of the last 15 minutes. Imagine these are the field values of 5 different events:

1. Banana, 2. Apple, 3. Banana, 4. Banana, 5. Strawberry

My aim is to display the number "3" in a panel, because the events contained 3 different kinds of fruit. Can anybody help me with how I could do that?
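The distinct count function in SPL does exactly this (a sketch; the index is a placeholder, and the time range could equally come from the panel's time picker):

```spl
index=main earliest=-15m
| stats dc(fruit) as distinct_fruits
```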
Hi, can somebody explain why I don't get any results?

index=...
| eval Timestamp=strftime(_time,"%d-%m-%Y %H:%M:%S")
| eval CurrentTime=strftime(now(),"%d-%m-%Y %H:%M:%S")
| eval NotUsedFor=(CurrentTime-Timestamp)
| chart max(NotUsedFor) by host

I want a chart that shows the difference between CurrentTime and the Timestamp.
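A likely culprit (an observation, not a confirmed diagnosis): strftime returns strings, and subtracting two strings yields null. Computing the difference on the raw epoch values first avoids this, formatting only at display time if needed:

```spl
index=...
| eval NotUsedFor=now()-_time
| chart max(NotUsedFor) as seconds_unused by host
```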
Hello, I want to create a real-time alert. I call the REST interface https://<host>:<mPort>/services/saved/searches with the following parameters:

is_visible=1&cron_schedule=* * * * *&description=real time data 25&alert_comparator=greater than&alert.digest_mode=0&action.webhook.param.url=www.ceshi:8099/splunk/webhook/alert&dispatch.earliest_time=rt-60s&alert_threshold=30&realtime_schedule=true&alert_type=number of events&search=ip=192.168.21.222&alert.expires=15d&name=417218432270925848&output_mode=json&dispatch.latest_time=rt-0s&disabled=0&is_scheduled=true&actions=webhook

However, this error is returned:

400 Bad Request: [{"messages": [{"type": "error", "text": "per result alert throttling require at least one throttling field, use * to throttle on all fields"}]}]

Is there a problem with the parameters I passed? Or is there an error in the SPL statement?
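The error message suggests that a per-result alert (alert.digest_mode=0) needs a throttling-fields setting. A sketch of the parameters that commonly resolve this (verify the names and the suppress period against the REST API reference for your version):

```conf
# added to the POST body -- throttle per result on all fields
alert.suppress = 1
alert.suppress.fields = *
alert.suppress.period = 60s
```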
Hello, in many Linux distributions the command netstat is now deprecated. This is a problem for the netstat sourcetype within the Linux/Unix add-on in Splunk. Is there a possibility of using another command, e.g. ss instead of netstat, for that sourcetype in the future? Many thanks in advance.
Hi, I'm looking to list the largest file per Linux host. I.e., if I have 6 hosts, all running Linux, I would want to know the largest file on each of the six hosts: the largest file's size as well as the name/path of that file. Regards, Hisham
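Assuming the file listings are already being indexed with fields for path and size (the sourcetype and field names below are placeholders), the largest file per host can be pulled out with a sort and dedup:

```spl
sourcetype=du_largest_files
| sort 0 - size
| dedup host
| table host, path, size
```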