All Topics

Hello folks, I'm trying to use a base search within a dashboard but it consistently returns no results. However, when I click Open in Search the results appear as expected. Any of you fine people have any suggestions?

<dashboard version="1.1" theme="dark">
  <search id="recycle">
    <query>
      index=o365_sharepoint AND (Operation=FileRecycled OR Operation=FolderRecycled OR Operation=FileVersionsAllDeleted)
    </query>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </search>
  <label>Test Dashboard</label>
  <row>
    <panel>
      <title>Abnormal File Deletion and Recycle Patterns</title>
      <table>
        <search base="recycle">
          <query>
            | stats count as "Object Deletions" BY UserId
            | search "Object Deletions" &gt; 50
            | sort - "Object Deletions"
          </query>
        </search>
        <option name="drilldown">cell</option>
      </table>
    </panel>
  </row>
</dashboard>
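A common cause of this symptom is that a non-transforming base search only passes a limited set of fields along to its post-process searches, so the post-process stats has no UserId field to group on, while Open in Search re-runs everything and works. A minimal sketch of the usual fix, assuming the field names from the post, is to declare the needed fields in the base search:

  <search id="recycle">
    <query>
      index=o365_sharepoint AND (Operation=FileRecycled OR Operation=FolderRecycled OR Operation=FileVersionsAllDeleted)
      | fields UserId, Operation
    </query>
    <earliest>-7d@h</earliest>
    <latest>now</latest>
  </search>

The post-process search can then stay as written.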
Is the Splunk App for Salesforce compatible with Splunk Cloud?
Hi All, we have a requirement to onboard the infrastructure metrics (CPU, memory, and disk) monitored using AppDynamics to Splunk. Both Splunk and AppDynamics are on cloud. I see a Splunk technology add-on for AppDynamics on Splunkbase, but it is not supported for the Splunk Cloud version. Is there any other way we can onboard such metrics to Splunk? Thanks. Happy Splunking
Without a tiered storage model it seems like there would be little argument for using cold/frozen storage, except potentially if additional compression helps save space. If not, using only a homePath in indexes.conf would seem to make all data readily accessible as hot/warm. However, checking the documentation, there seem to be three paths per index that are required for splunkd to start: homePath, coldPath, and thawedPath (indexes.conf - Splunk Documentation). So using a single disk/volume/mount, what does the indexes.conf look like? Should the same path just be set for all three? Making sure that maxVolumeDataSizeMB adds up to the total volume available on /data/splunk/warm.

[volume:storage]
path = /data/splunk/warm/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 2800000
...
...
[volume:_splunk_summaries]
path = /data/splunk/warm/ # ~ 200GB
maxVolumeDataSizeMB = 200000
...
...
[main]
homePath = volume:storage/defaultdb/db
coldPath = volume:storage/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:storage/historydb/db
coldPath = volume:storage/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:storage/summarydb/db
coldPath = volume:storage/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
...
...
[windows]
homePath = volume:storage/windows/db
coldPath = volume:storage/windows/colddb
summaryHomePath = volume:storage/windows/summary
thawedPath = $SPLUNK_DB/windows/thaweddb
tstatsHomePath = volume:_splunk_summaries/windows/datamodel_summary
frozenTimePeriodInSecs = 63072000

[linux]
homePath = volume:storage/linux/db
coldPath = volume:storage/linux/colddb
summaryHomePath = volume:storage/linux/summary
thawedPath = $SPLUNK_DB/linux/thaweddb
tstatsHomePath = volume:_splunk_summaries/linux/datamodel_summary
frozenTimePeriodInSecs = 63072000

I'm assuming this would work, right? Though as it seems that Splunk requires, and does make use of, "cold" and "thawed" anyway, does it make more sense to just partition mounts for warm and cold separately anyway?

[volume:warm]
path = /data/splunk/warm/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /data/splunk/cold/ # adjust when correct disk is mounted
maxVolumeDataSizeMB = 2500000
...
...
[volume:_splunk_summaries]
path = /data/splunk/warm/ # ~ 200GB
maxVolumeDataSizeMB = 200000
...
...
[main]
homePath = volume:warm/defaultdb/db
coldPath = volume:cold/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb

[history]
homePath = volume:warm/historydb/db
coldPath = volume:cold/historydb/colddb
thawedPath = $SPLUNK_DB/historydb/thaweddb

[summary]
homePath = volume:warm/summarydb/db
coldPath = volume:cold/summarydb/colddb
thawedPath = $SPLUNK_DB/summarydb/thaweddb
...
...
[windows]
homePath = volume:warm/windows/db
coldPath = volume:cold/windows/colddb
summaryHomePath = volume:warm/windows/summary
thawedPath = $SPLUNK_DB/windows/thaweddb
tstatsHomePath = volume:_splunk_summaries/windows/datamodel_summary
frozenTimePeriodInSecs = 63072000

[linux]
homePath = volume:warm/linux/db
coldPath = volume:cold/linux/colddb
summaryHomePath = volume:warm/linux/summary
thawedPath = $SPLUNK_DB/linux/thaweddb
tstatsHomePath = volume:_splunk_summaries/linux/datamodel_summary
frozenTimePeriodInSecs = 63072000

Does it matter, and what would be best practice?
Hello, I am currently working on configuring the Splunk Enterprise Security app. I already have data flowing into Splunk Enterprise, but I'm not sure how to properly configure the data inputs for the app. Could anyone guide me on how to configure the data sources in the Enterprise Security app? If there is any specific documentation on this, I would appreciate it if you could provide it.
Hi Splunkers, I would like to display a count broken down by several locations on a map. On the map, I would like only the outline of each country to be highlighted. Is this possible with the cluster map view?
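The cluster map draws bubbles at point locations; highlighting country outlines is what the choropleth map does instead. A minimal SPL sketch, assuming a field named Country holds country names that match Splunk's built-in geo_countries lookup:

| stats count BY Country
| geom geo_countries featureIdField=Country

Rendered with the choropleth map visualization, this shades each country's outline by its count.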
Hello all, So I'm very new to Splunk, like I've been playing around with it for less than 3 months.  I have been tasked with sending logs from Logstash into Splunk, however, I have no idea where to start with this.  I've been looking online, but the information I find is very confusing. Does someone have some kind of guide that explains how to get data from Logstash to Splunk in detail, including what files need to be configured in Logstash? Any help would be appreciated. Thanks
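One commonly used path is Splunk's HTTP Event Collector (HEC): enable a HEC token on the Splunk side and point Logstash's http output plugin at the collector endpoint. A rough, untested sketch, with hostname, port, index, and token as placeholders to fill in:

Splunk side (inputs.conf, or via Settings > Data Inputs > HTTP Event Collector):

[http://logstash]
token = <your-hec-token>
index = main
disabled = 0

Logstash side (the output section of the pipeline config, using the logstash-output-http plugin):

output {
  http {
    url => "https://your-splunk-host:8088/services/collector/raw"
    http_method => "post"
    headers => { "Authorization" => "Splunk <your-hec-token>" }
    format => "json"
  }
}

The /raw endpoint accepts arbitrary payloads; the /event endpoint instead expects an {"event": ...} JSON envelope, so the event would need reshaping before send.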
Hello guys, I have a dashboard with two tabs. I've added a dropdown input and I'm going to add more inputs, but I want to display an input only for a specific tab. In my case, I want, for example, the dropdown input to be displayed only when the Inventory tab is active. The dropdown input should disappear when I click the Sales tab. Can anyone help me achieve this? Thanks a lot
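Simple XML can show or hide an input via the depends attribute tied to a token that the tab control sets. A rough sketch, assuming the tabs are implemented as a link input with token tab (all names here are placeholders; if the tabs come from a JS extension, the token wiring may differ):

<input type="link" token="tab">
  <choice value="inventory">Inventory</choice>
  <choice value="sales">Sales</choice>
  <change>
    <condition value="inventory">
      <set token="show_inv">true</set>
    </condition>
    <condition value="sales">
      <unset token="show_inv"></unset>
    </condition>
  </change>
</input>

<input type="dropdown" token="my_filter" depends="$show_inv$">
  ...
</input>

The dropdown renders only while show_inv is set, i.e. while the Inventory tab is active.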
Hi Experts, is there any way I can add "Hostname, Node Name, Tier name" in health rule names? I tested with some placeholders but it didn't work. I would appreciate your suggestions. Thanks, Raj AppDynamics
Hello, is it possible to restrict Splunk roles by source IP? Example:
Splunk role: my_user_role, allowed source IPs 172.16.0.0/16
Splunk role: my_admin_role, allowed source IPs 192.168.1.5, 192.168.1.6
Kind Regards, Andre
Hi All, I need to automate the execution of specific queries in Splunk Enterprise on a weekly basis, export the results as CSV files, and upload them to a designated SharePoint Online folder for visualization purposes. Based on your experience, what are the available options, and which one would you recommend as the best?   Thanks, John
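For the Splunk side, a scheduled saved search can produce the CSV on a weekly cron; the SharePoint upload itself has to happen outside Splunk (for example a script or a Power Automate flow picking the file up — both named here only as illustrations). A minimal savedsearches.conf sketch, with the search name and query as placeholders:

[Weekly SharePoint Export]
search = <your weekly query> | outputcsv weekly_export.csv
cron_schedule = 0 6 * * 1
enableSched = 1

outputcsv writes under $SPLUNK_HOME/var/run/splunk/csv on the search head, from where an external job can collect and upload the file; an alternative is pulling results over the REST API (search/jobs with output_mode=csv) from a scheduled script.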
I need to run a small JavaScript file (main.js) across multiple websites. These websites may or may not have Splunk RUM already running on them. Can I package the Splunk RUM SDK into my main.js, so it only collects data/stats from my main.js, rather than collecting it from the entire HTML page? For example, I don't want it to collect all the API calls or JavaScript errors from the whole page, just my small bit. Thanks
Hi everyone, I have a dataset:

| makeresults
| eval APP1="appdelta", hostname1=mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| fields - _time

I want the final output to be like below:

APP1       hostname1        hostname2
appdelta   syzhost.domain1  syzhost.domain1
appdelta   abchost.domain1  abchost.domain1
appdelta   egfhost.domain1

Any suggestions?
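One way to pair the two multivalue fields by position is to expand over an index and pick values with mvindex; a sketch against the dataset above (mvrange, mvexpand, and mvindex are all standard SPL):

| makeresults
| eval APP1="appdelta", hostname1=mvappend("syzhost.domain1","abchost.domain1","egfhost.domain1"), hostname2=mvappend("syzhost.domain1","abchost.domain1")
| eval idx=mvrange(0, max(mvcount(hostname1), mvcount(hostname2)))
| mvexpand idx
| eval hostname1=mvindex(hostname1, idx), hostname2=mvindex(hostname2, idx)
| fields - idx, _time

Where hostname2 runs out of values, mvindex returns null, which leaves the cell blank as in the desired table.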
  Hi Team, I am reaching out to discuss a requirement we have regarding the monitoring of an application deployed on MS Dynamics 365. Specifically, we need to monitor two servers: one is a front-end server running on IIS, and the other is a back-end server with some application services running, but without any specific technology stack. Could you please help me understand whether monitoring Dynamics 365 via AppDynamics is supported? Additionally, I would appreciate any guidance on how to initiate the monitoring process using AppDynamics SaaS. Regards, Vinodh
I have a list of hostnames being generated from a left join for different applications, in multivalue table columns:

APP1       hostname1        hostname2
appdelta   syzhost.domain1  syzhost.domain1
           abchost.domain1  abchost.domain1
           egfhost.domain1

What I need is a separate column with just egfhost.domain1, showing the diff of the two lists.
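One way to get the values present in hostname1 but not in hostname2 is mvmap with a per-value membership test; a sketch (mvmap needs Splunk 8.0+, and the regex-escaping of the dots is the fragile part to verify):

| eval only_in_1=mvmap(hostname1, if(isnull(mvfind(hostname2, "^".replace(hostname1, "\.", "\\.")."$")), hostname1, null()))

For each value of hostname1, mvfind looks for an exact match in hostname2, and the value is kept only when none is found, which should yield egfhost.domain1 here.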
I have a configuration where an intermediate forwarder is forwarding logs to a central indexer that I do not control. In order to send logs to the indexer they MUST be tagged with a Project_ID and GUIDe in the _META tag, otherwise the logs get rejected. I would like to set up a multi-tenant-like configuration where the intermediate forwarder is used by several projects that have different Project_ID and GUIDe values. The universal forwarder configuration that I am sending to the clients has the tagging, and they will be sending their unique project and GUIDe numbers. The problem I am having is that the intermediate forwarder (Windows) itself is not tagging its own logs when it sends to the indexers. The clients tag and send their logs just fine. If I apply the same tagging as the clients on the intermediate forwarder, it will tag the logs twice or overwrite the tags. What I want to do is tag only the intermediate forwarder's logs, or any logs that are not already tagged. This is what I tried, and it is not working. I was attempting to use a regex to add the tags only to logs that are not already tagged, using the filter ^(?!.*Project_ID::), but this is not working. Any help is appreciated.

Intermediate forwarder: etc/apps/projtransforms/local/props.conf

[default]
TRANSFORMS-projectid = addprojectid
TRANSFORMS-IntermediateForwarder = addIntermediateForwarder
TRANSFORMS-GUIDe = addGUIDe

etc/apps/projtransforms/transforms.conf

[addprojectid]
REGEX = ^(?!.*Project_ID::)
FORMAT = Project_ID::123456
MV_ADD = true

[addGUIDe]
REGEX = ^(?!.*GUIDe::)
FORMAT = GUIDe::654321
MV_ADD = true

[addIntermediateForwarder]
REGEX = .*
FORMAT = IntermediateForwarder::XXXXXX
MV_ADD = false

UF client tagging: /etc/system/local/inputs.conf

[default]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0

[WinEventLog]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0

[perfmon]
_meta = GUIDe::654321 Project_ID::123456
disabled = 0
index = spl_win
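One detail worth noting: _meta set in inputs.conf applies only to data that the stanza itself ingests, so tagging the intermediate forwarder's own logs in its local input stanzas should not touch the pass-through traffic arriving on splunktcp. A sketch of that approach, with the GUIDe/Project_ID values and port as placeholders:

etc/system/local/inputs.conf on the intermediate forwarder:

[WinEventLog]
_meta = GUIDe::IF_GUID Project_ID::IF_PROJECT
disabled = 0

[splunktcp://9997]
# no _meta here: client events keep the tags they arrived with

If the transforms route is kept instead, note that transforms which write indexed fields also need WRITE_META = true, and that the REGEX is evaluated against _raw by default rather than against the existing _meta tags, so the not-already-tagged test may need a different source — worth verifying against the transforms.conf docs.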
I recently updated Splunk to the latest version. When I did this, our universal forwarders and heavy forwarders stopped showing up under Forwarder Management. They show under the Monitoring Console, and data is flowing amongst the servers. I edited deploymentclient.conf to use the FQDN and the IP followed by port 8089, but nothing is working to get the forwarders or heavy forwarders to show up.
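For reference, a minimal working deploymentclient.conf stanza looks like this, with the host a placeholder:

[target-broker:deploymentServer]
targetUri = deploy.example.com:8089

If that is already in place, splunkd.log on a forwarder (search for DeployClient or PhonehomeThread) usually shows whether phone-homes are reaching the deployment server and why a client might not be listed.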
We have clustered deployment servers (with an NFS shared drive) because we will have tens of thousands of clients in total in the final situation. We have deployed the UF to workstations, and we have a workstations serverclass with a few apps on it, including base_app, which includes deploymentclient.conf, outputs.conf, server.conf, and certificates. When the UF agent is installed on workstations through SCCM, it phones home and then just reports that serverclass=workstations is uninstalling app=C:\ProgramFiles\SplunkUniversalForwarder\etc\apps\base_uf. We have tried crossServerChecksum with both true and false, with no change. We can't figure it out from any logs: there are no errors, it just says that it started to uninstall the app, then restarts the UF and loses the connection. If we check one unique client, it belongs to only one serverclass, and the workstations serverclass includes our base_app and then the Splunk_TA_windows and sysmon apps. We have version 9.4.1 on our Enterprise and the UFs have 9.3.2; phone-homes come through an F5 load balancer. We are running out of ideas with this.
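One thing that can produce exactly this uninstall-on-phonehome behavior behind a load balancer is the clustered deployment servers disagreeing about app content or serverclass membership, which is what crossServerChecksum in serverclass.conf is meant to address; both DS nodes must also serve identical app content from the shared NFS path. A sketch of the relevant stanzas, with the whitelist pattern a placeholder:

serverclass.conf on both deployment servers:

[global]
crossServerChecksum = true

[serverClass:workstations]
whitelist.0 = <workstation host pattern>

[serverClass:workstations:app:base_app]
restartSplunkd = true

Also worth comparing: the post references base_app in the serverclass but the uninstall message names base_uf; a deployment client uninstalls any app it holds that the server no longer offers under that exact name, so a rename or a stale app directory on one DS node could explain the loop.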
Hi, I have a query that goes something like this:

index=myindex
| eval urgency="medium", account_name='awsMetadata.account.name'
| stats count, values(account_name) as account_name, values(urgency) as urgency

I also have a CSV file with the following columns and values:

env,project_name
prod,prod_account11
dev,dev_account3
prod,prod_account55
qa,qa_account43

I wish to compare each of the results in the query above, using the "account_name" field, with the CSV file field "project_name", and if those two values match for a result, I wish to create a new field "env" in my results based on the "env" field from the CSV file. E.g. if the query result "prod_account55" from the account_name field is found in the CSV file as "prod_account55" in the project_name field, extract the prod value from the env field as a new field in the results.
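Assuming the CSV is uploaded as a lookup file (called projects.csv here, a placeholder name), the lookup command does this match-and-enrich directly:

index=myindex
| eval urgency="medium", account_name='awsMetadata.account.name'
| stats count, values(account_name) as account_name, values(urgency) as urgency
| lookup projects.csv project_name AS account_name OUTPUT env

Each account_name value that matches a project_name row gets the corresponding env; when account_name is multivalue, lookup applies the match per value.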
Hello Co-Splunkers, greetings. I have a point to fix in a Splunk visualization (cluster map). I am plotting the values based on LAT and LON fields, and the results come out well, but is there any scope to plot a continuous line instead of bubble marks? I need the bubble points to become a connecting line, like in Google Maps, where it connects from the start point to the end point. (Two screenshots were attached: one showing my data plotted as bubbles, one showing the desired connected route.) Thanks in advance for your responses and for the time spent on my question.