All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


You need to give a clearer description.  For example, does the phrase "throw the result" mean to discard events whose src_ip is found in file.csv, or to keep only the matching events in order to raise an alert?  Have you read the documentation for the lookup command?  Here is an example assuming that your goal is actually to keep the matching events, and that file.csv contains a single column named malicious_ip:

| lookup file.csv malicious_ip AS src_ip OUTPUT malicious_ip AS matching
| where isnotnull(matching)
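An alternative sketch under the same assumptions (the index name here is a placeholder) is to use the lookup file as a subsearch filter, which keeps only events whose src_ip appears in the CSV:

```
index=your_index
    [ | inputlookup file.csv
      | rename malicious_ip AS src_ip
      | fields src_ip ]
```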
The problem is that Splunk always flattens arrays.  The trick is to preserve logs{} as a multivalue field before mvexpand:

index="factory_mtp_events"
| spath path=logs{}
``` alternative syntax: | spath logs{} ```
| mvexpand logs{}
| search test_name="Sample Test1"
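If mvexpand has trouble with the braces in the field name, a variant sketch (assuming each element of logs{} is itself a JSON object) is to rename the multivalue field first, expand it, and re-extract fields from each element:

```
index="factory_mtp_events"
| spath path=logs{} output=logs
| mvexpand logs
| spath input=logs
| search test_name="Sample Test1"
```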
I have a single search head and configured props.conf with DATETIME_CONFIG = CURRENT, as I want the data to be indexed at the time Splunk receives the report. I restarted Splunk after every change. Previously I had it set to a field in the report. When I upload a CSV and use the correct sourcetype, it assigns the current time to the report. When I upload a report via curl through the HEC endpoint, it indexes it at the right time. Same thing when I run it through a simple script. But when the test pipeline runs, it indexes data at the timestamp that is in the report, even though it is using the same sourcetype as the other tests I did. Is it possible to add a time field that overrides the sourcetype config? Is there a way to see the actual API request in the Splunk internal logs?
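For context, an event sent to HEC's /services/collector/event endpoint can carry its own time field in the JSON envelope, and that value takes precedence over sourcetype-level timestamp settings. A pipeline that sets this field in its payload would explain the behavior. A sketch of such a request (host, port, token, and sourcetype are placeholders):

```shell
curl -k https://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <YOUR-HEC-TOKEN>" \
  -d '{"time": 1700000000, "sourcetype": "my_sourcetype", "event": {"msg": "sample"}}'
```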
I have a drilldown into another dashboard with parameters earliest=$earliest$ and latest=$latest$, and this works. But when I go into the drilldown dashboard directly, it sets the data to come back as "all time".  Is there a way that I can have multiple defaults, or some other constraint that doesn't cause this? Here's what I've been working on, but it's not working. Any feedback would be helpful...

<input type="time" token="t_time">
  <default>
    <earliest>if(isnull($url.earliest$), "-15m@m", $url.earliest$)</earliest>
    <latest>if(isnull($url.latest$), "now", $url.latest$)</latest>
  </default>
</input>
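For what it's worth, `<default>` in Simple XML takes literal values rather than `if()` expressions. One sketch (token names are assumptions) is to set plain defaults and have the drilldown link pass `form.t_time.earliest` / `form.t_time.latest` as URL parameters, which prepopulate the input and override the defaults when present:

```xml
<input type="time" token="t_time">
  <default>
    <earliest>-15m@m</earliest>
    <latest>now</latest>
  </default>
</input>
```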
Good day. I am somewhat new to Splunk. I am trying to cross-reference some malicious IPs I have in a file.csv against the src_ip field, and if there are matches I want to surface the result. I understand that I have to create a lookup, but I cannot get any further.
No, I don't believe so. 
You could try setting initial / default values for the filters. Other than that, you may have to share the source of your dashboard for us to be able to suggest changes.
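As a sketch of the first suggestion (the token and field names are made up for illustration), giving each filter input a default means its token is always set, so searches that depend on it stop waiting for input:

```xml
<input type="dropdown" token="status_filter">
  <label>Status</label>
  <choice value="*">All</choice>
  <default>*</default>
  <prefix>status=</prefix>
</input>
```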
You can use a UF (or HF) to directly monitor a network input (TCP or UDP port). Be aware, however, of the shortcomings of such a solution.
1. You can only define one sourcetype for a given input, so if you want to listen for data from several different sources you have to either create multiple inputs or do some complicated index-time rewriting and rerouting. Not easy to maintain.
2. There used to be some performance problems compared to a specialized syslog daemon.
3. You lose network-level metadata.
So if you can live with that, you can define a TCP or UDP input. But it's not a recommended solution.
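A minimal inputs.conf sketch of such an input (the port, index, and sourcetype are placeholders to adapt):

```
[udp://514]
sourcetype = syslog
index = network
connection_host = ip
```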
It's a stanza defining a type of input (powershell in this case) and a unique name for it. In some cases (for example, monitor inputs) input names also have functional meaning (a monitor input's name points to the monitored files or directories on disk).
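For illustration, a sketch of a full stanza of that kind (the script and schedule are made-up examples): everything in the brackets is functional, not a comment, with `powershell://` selecting the input type and `CertStore-LocalUser` serving as the unique name:

```
[powershell://CertStore-LocalUser]
script = Get-ChildItem Cert:\CurrentUser\My | Select-Object Subject, Thumbprint, NotAfter
schedule = 0 */6 * * *
sourcetype = Powershell:CertStore
```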
I'm a bit lost here. If you're doing vMotion between datastores, why bother with logical storage operations within the VMs?
Hi JP, Thank you for getting back to my message. I have gone through the documentation to enable HEC and generate the token. I also enabled SSL. This is the part I am getting confused about: <protocol>://<host>:<port>/<endpoint> in the documentation.  And I have it set as this: https://computer-name:8088/services/collector/event Is that correct?
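For reference, a test event can confirm whether a URL of that shape works end to end (a sketch: the token is a placeholder, and -k skips certificate verification, which only makes sense with a self-signed cert):

```shell
curl -k https://computer-name:8088/services/collector/event \
  -H "Authorization: Splunk <YOUR-TOKEN>" \
  -d '{"event": "hello from HEC test"}'
```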
Hello, I have a dashboard with a few panels. All the panels have drilldown tokens that update a table. I also have a few filters that are supposed to update the table, but I see the message "Search is waiting for input..." when selecting options from the filters.  The table updates only when clicking on the other panels. How can I update the table from the filters?   Thanks
Did you see any change in the data being ingested when you made the TRUNCATE value change?  Also, if you change it to something specific to test it, like 10237, does that limit it to 10237 bytes?  This is mainly just to see whether this particular TRUNCATE setting is what is limiting your data; maybe we can rule it out as the culprit so we know to dig further.
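For reference, TRUNCATE is set per sourcetype in props.conf on the parsing tier (the sourcetype name below is a placeholder; setting it to 0 disables truncation entirely):

```
[my_sourcetype]
TRUNCATE = 10237
```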
Based on your description it sounds like you are configuring the Blue Prism side of things to point at the HTTP Event Collector (HEC).  To get the URL and token, you'll need to make sure the HEC configuration has been completed on the Splunk side so Blue Prism has a place to send its data. Here's the documentation that pertains to configuration of the HEC component in Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector 
We are planning a migration of hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense, because the plan applies maintenance mode for each indexer, and I grew up with using maintenance mode for the entire operation. Any thoughts?

The current plan:

Event monitoring
- Suspend monitoring of alerts on the indexers

(Repeat the following for each indexer, one at a time)

Splunk Ops:
- Put Splunk Cluster in Maintenance Mode
- Stop Splunk service on one indexer

VM Ops
- vMotion the existing 2.5TB disk to any Unity datastores
- Provision new 2.5TB VM disk from the VSAN datastore

Linux Ops
- Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
- Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"

Splunk Ops
- Restart Splunk service on the indexer
- Take Indexer Cluster out of Maintenance Mode
- Review Cluster Master to confirm the indexer is processing and rebalancing has started as expected
- Wait a few minutes to allow Splunk to rebalance across all indexers

(Return to top and repeat steps for the next indexer)

Splunk Ops:
- Validate service and perform test searches
- Check CM Panel -> Resources/usage/machine (bottom panel - IOWait Times) and monitor changes in IOWait

Event monitoring
- Enable monitoring of alerts on the indexers

In addition, Splunk PS suggested using:

splunk offline --enforce-counts

Not sure if it's the right way, since it might need to migrate the ~40TB of cold data, which would slow the entire operation.
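A rough sketch of the Linux Ops step, assuming the old logical volume is /dev/vg_splunk/hot_data and the new disk shows up as /dev/sdc (all names are placeholders; verify device names with lsblk and update /etc/fstab before running anything like this):

```shell
# Remount the old hot-data volume under a new path so the data stays reachable
umount /opt/splunk/hot-data
mkdir /opt/splunk/hot-data-old
mount /dev/vg_splunk/hot_data /opt/splunk/hot-data-old

# Build a new volume group and logical volume on the new disk
pvcreate /dev/sdc
vgcreate vg_splunk_new /dev/sdc
lvcreate -l 100%FREE -n hot_data vg_splunk_new
mkfs.xfs /dev/vg_splunk_new/hot_data
mount /dev/vg_splunk_new/hot_data /opt/splunk/hot-data
```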
@Gregory.Burkhead I was trying to find this answer as well and came across your post. I found this article on docs that may help:  https://docs.appdynamics.com/appd/23.x/latest/en/analytics/configure-analytics/collect-transaction-analytics-data What concerns me is that you mentioned turning off "Enable Analytics for New Applications." That sliding button should turn analytics off for new apps and all BTs except All Other Traffic. If that is the situation, there appears to be a bug, which would require support.
This may be a noob question, but what is the first line of the inputs.conf entry defining?  Does everything within the brackets, [powershell://CertStore-LocalUser], provide function, or is this a commented area?
Data is written to SmartStore (S2) as soon as it rolls to warm.  On a test system, it's typical for hot buckets to not roll to warm until the indexers restart.  On a production system, however, that should happen at least once a day.  Hot buckets are never written to S2.  There is no setting to give you instant replication to S2.  In an indexer cluster, hot buckets are replicated to other indexers almost immediately.
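If the goal is to shrink the window before upload, one lever to consider (a sketch only; the index name is a placeholder, and values this aggressive create many small buckets, which hurts search performance) is forcing hot buckets to roll sooner in indexes.conf:

```
[my_index]
maxHotIdleSecs = 600
maxHotSpanSecs = 3600
```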
I have been testing out SmartStore in a test environment. I cannot find the setting that controls how quickly data ingested into Splunk is replicated to my S3 bucket. What I want is for any ingested data to be replicated to my S3 bucket as quickly as possible; I am looking for the closest to 0 minutes of data loss. Data only seems to replicate when the Splunk server is restarted. I have tested this by setting up another Splunk server with the same S3 bucket as my original, and it seems to have only picked up older data when searching.

max_cache_size only controls the size of the local cache, which I'm not after.
hotlist_recency_secs controls how long before hot data can be deleted from the cache, not how long before it is replicated to S3.
frozenTimePeriodInSecs, maxGlobalDataSizeMB, and maxGlobalRawDataSizeMB control freezing behavior, which is not what I'm looking for.

What setting do I need to configure? Am I missing something within conf files in Splunk, or permissions to set in AWS for S3?  Thank you for the help in advance!
Hi @Fredrik.Kervall, I wanted to share this AppD Docs page: https://docs.appdynamics.com/appd/22.x/latest/en/application-monitoring/tiers-and-nodes/monitor-iis And the forum search results for "IIS", in case any other existing content could be helpful: https://community.appdynamics.com/t5/forums/searchpage/tab/message?filter=location&q=%22IIS%22&noSynonym=false&inactive=false&advanced=true&location=category:Discussions&collapse_discussion=true&search_type=thread