All Posts


For all apps, if documentation is available, it should appear in one of the tabs on the Splunkbase page itself.
Hi @edoardo_vicendo , thanks for the reply. Yeah, no issue with the sending of data. Like you, we managed to crack it. But the HEC that receives the data is also receiving from an appliance and an AWS Firehose, on two other input tokens. Using the Splunk search I sent, I'm able to see metrics for connections, bytes ingested and parsing errors for those two other tokens, but NONE from the token used by the UF using S2S over HTTP.
Hello @nunoaragao , unfortunately I no longer have access to the Splunk UF to perform a check, and I never had access to the third-party Splunk where we were sending the data. By the way, I didn't really get which issue you are facing. Please remember that in outputs.conf you don't have to specify the HEC endpoint (/services/collector/s2s), just the base URI (https://yourdomain.com), rather than:

uri=https://yourdomain.com/services/collector/s2s
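For reference, a minimal outputs.conf sketch for S2S over HTTP might look like the following; the hostname, port, and token are placeholders, so check the exact setting names against your Splunk version's outputs.conf spec:

```
# outputs.conf on the UF (S2S over HTTP)
[httpout]
httpEventCollectorToken = <your-HEC-token>
# base URI only - no /services/collector/s2s path
uri = https://yourdomain.com:8088
```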
If you only have one count to display, another potentially useful visualization is to shift all days into one 24-hour period.  Here is a demonstration for 9am - 5pm:

| tstats count where index=_internal earliest=-30d latest=+0d@d by _time span=1h
| eval day = relative_time(_time, "-0d@d")
| where relative_time(_time, "-8h@h") > day AND relative_time(_time, "-18h@h") < day
| timechart span=1h sum(count)
| timewrap 1day
Your needs are probably better served by INDEXED_EXTRACTIONS=csv (index-time extraction) or KV_MODE=csv (search-time) in the sourcetype definition.  Using regex to handle structured data like CSV is very fragile.
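As a rough props.conf sketch (the sourcetype name is illustrative, and note that INDEXED_EXTRACTIONS must be applied where the file is read, i.e. on the UF/HF, while KV_MODE applies on the search head):

```
# props.conf, index-time variant (on the forwarder/indexer that reads the file)
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1

# props.conf, search-time alternative (on the search head)
[my_csv_sourcetype]
KV_MODE = csv
```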
I found the solution. In the end, it boiled down to a stupid mistake in a defined macro. My search really looked like this:

`my_search(param1, param2)` | `fixedrange`

...which expanded to the following snippet from my original question:

```from macro "my_search":```
... | table _time, y1, y2, y3, ..., yN
```from macro "fixedrange":```
| append [ | makeresults | addinfo | eval x=mvappend(info_min_time, info_max_time) | mvexpand x | rename x as _time | eval _t=0 | table _time, _t ]

But `my_search` was defined like this:

| search index=... sourcetype=... param1=... param2=...

Note the leading pipe, which shouldn't have been there! Now, the search optimization produced different results, depending on whether the 2nd macro was applied or not.

Case A (fast): `my_search(param1, param2)` produced:

| search (sourcetype=... param1=... param2=...) | search index=... | ...

Case B (slow): `my_search(param1, param2)` | `fixedrange` produced:

| search | search (index=... sourcetype=... param1=... param2=...) | ...

...and obviously the first search term in case B was causing the headache, although the final result set was identical in both cases. Ouch!
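For reference, a macro meant to expand to base search terms at the start of a pipeline would be defined without the leading pipe. A macros.conf sketch (stanza name and argument handling are illustrative, and the "..." placeholders are deliberate):

```
# macros.conf - expands inline, so no leading pipe
[my_search(2)]
args = param1, param2
definition = index=... sourcetype=... param1=$param1$ param2=$param2$
```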
This thread is five years old and has an accepted answer, so your problem has a better chance of being seen by someone who can help if you post a new question with details about the problem, including what steps you take and what errors you see.
We get data in using HEC tokens, and the data is flowing just fine. But when we try to view the HTTP Event Collector panel under Indexing > Inputs, it says we have no tokens configured. How do we configure the MC to see the existing tokens?
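One way to cross-check what the REST layer actually reports (assuming the instance hosting the tokens is a search peer of wherever you run this, and that your role has permission to read HTTP inputs):

```
| rest /services/data/inputs/http splunk_server=*
| table splunk_server title token disabled
```

If tokens show up here but not in the Monitoring Console panel, the MC's server roles or search peer configuration are the next things to verify.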
Thank you, @VatsalJagani! That's probably why I didn't see documentation on the installation process.
Hi @edoardo_vicendo , Do you still have your working setup? Do you find that introspection logs from the HEC receiver instances report metrics for tokens used by "/services/collector/raw" and "/services/collector/event", but not "/services/collector/s2s"?

index="_introspection" component=HttpEventCollector data.token_name=*
@tungpx  Just assign the $click.value2$ to your token on drilldown. Here is a run-anywhere example:

<dashboard version="1.1" theme="light">
  <label>Table DrillDown</label>
  <row>
    <panel>
      <table>
        <search>
          <query>|makeresults | eval user="A B C" | makemv user | mvexpand user | streamstats count | eval ID1=user.count | eval ID2=user.count.0 | fields - _time</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <set token="token_value">$click.value2$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <h2> index=myindex <b> <font color="red"> ID=$token_value$ </font> </b></h2>
      </html>
    </panel>
  </row>
</dashboard>
Hi @TNTMongoose , ask Splunk Support, it's the only way. Ciao. Giuseppe
Good morning, I am requesting a link to download a previous version of the Splunk Forwarder: Windows x64, version 7.2.6. I'm trying to repair the installation but it is requesting the MSI to complete. Thank you.
Hello, I would like to merge 2 indexer clusters.

Context
- 2 indexer clusters
- 1 search head cluster

Objectives
- Add new indexers to cluster B.
- Move data from cluster A to cluster B.
- Remove cluster A.

Constraint
- Keep service interruptions to a minimum.

What do you think of this process?

Before starting
- Make sure the clusters have the same Splunk version.
- Make sure the clusters have the same configuration.
- Make sure the volumes on B can absorb the indexes from A.
- Make sure common indexes have the same configuration. If not, define their final configuration.

Add new peer nodes
- Install new peer nodes.
- Add the new peer nodes to cluster B.
- Rebalance data.
- Add the new peer nodes to outputs.conf and restart.

Move data
- Remove peer nodes A from outputs.conf and restart.
- Move index configuration from A to B.
- Copy peer apps from A to B.
- Put peer nodes A in manual detention to stop replication from other peer nodes.
- Add peer nodes A to cluster B.

Remove peer nodes A
One indexer at a time:
- Remove the peer node from cluster B.
- Wait for all fixup tasks to complete so the cluster meets its search and replication factors.
- Rebalance data.

Finally
- Make sure there is no major issue in the logs.
- Update diagrams and inventory files (spreadsheets, inventory files, lookups, etc.).
- Update dashboards and reports if necessary.
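A few of the steps above map onto well-known CLI commands; this is only a sketch with placeholder values, and on older releases the flags differ (-mode slave / -master_uri instead of -mode peer / -manager_uri), so verify against your version's documentation:

```
# Join an indexer to cluster B's manager (run on the peer)
splunk edit cluster-config -mode peer -manager_uri https://<managerB>:8089 -secret <pass4SymmKey> -replication_port 9887
splunk restart

# Put a peer into manual detention (run on the peer)
splunk edit cluster-config -manual_detention on

# Decommission a peer gracefully, waiting for bucket fixup (run on the peer)
splunk offline --enforce-counts

# Start a data rebalance (run on the cluster manager)
splunk rebalance cluster-data -action start
```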
I can't seem to find anything in known issues that matches your problem on the version your team is using: https://docs.splunk.com/Documentation/ES/6.6.2/RN/KnownIssues

Restricting access to indexes should not affect the capability to make changes to the IR dashboard (unless the notable index has been restricted too). The one recommendation I have is to ask the Splunk admin to check whether the capability to edit notables is restricted in the custom roles that may have been developed for restricting access to indexes. Also, if possible, could you share the error that you encounter while editing the notables? It will help us help you find a solution.
Thank you @ITWhisperer.  This seems to be working, however it is not displaying "No events found" where there are 0 or blank events. Attached snapshot below.  Also, can you please explain the query?
FWIW, the appendcols command rarely factors into a solution.  The conditions for it to work correctly are too narrow. The general form to solve a problem like this is to search the index for field values that are not in the lookup table.

index=prod_syslogfarm NOT [ | inputlookup myinventory.csv | fields IP_Address | format ]
| stats count by IP_Address
| lookup myinventory.csv IP_Address OUTPUT Hostname Environment Tier3 Operating_System
| rename Hostname as missingname
| table missingname Environment Tier3 Operating_System
We need more information.
- What version of Splunk is the forwarder?  Is it a Universal Forwarder or Heavy Forwarder?
- What version of Splunk is the Deployment Server?  (FTR, the DS does not push configurations - forwarders pull them from the DS.)
- What error messages are in the forwarder's splunkd.log file?
Hi @richgalloway , I installed the O365 app on the Splunk search head. But when I restart Splunk, the app disappears. Can you please help?
Can someone help me understand what I am doing wrong here?

My requirement: I have an index=prod_syslogfarm which reports on the devices forwarding logs to the syslog collectors.  The devices may report with either hostname / IP address / FQDN.  Now, I have to compare this with our master asset inventory (the lookup myinventory.csv below) and create a report with the hostnames that are not seen in the prod_syslogfarm index.  I am making hostname the common field between the main search and the lookup file, and below is my query.

The query below is not working, as the report contains the hostnames that are present in the syslogfarm index.

index=prod_syslogfarm
| stats count by IP_Address
| lookup myinventory.csv IP_Address OUTPUT Hostname
| table IP_Address Hostname
| rename Hostname as Reporting_Host
| appendcols [ search index=prod_syslogfarm | eval fqdn_hostname=lower(fqdn_hostname) | eval Reporting_Host=lower(Reporting_Host) | eval Reporting_Host=mvappend(Reporting_Host, fqdn_hostname) ]
| dedup Reporting_Host
| table Reporting_Host
| rename Reporting_Host as Hostname
| appendcols [ inputlookup myinventory.csv | eval Hostname=lower(Hostname) | stats values(Hostname) as cmdb_hostname by Hostname ]
| eval missingname = mvmap(cmdb_hostname, if(cmdb_hostname != Hostname, cmdb_hostname, null()))
| table missingname
| mvexpand missingname
| lookup myinventory.csv Hostname as missingname OUTPUT Environment Tier3 Operating_System
| table missingname Environment Tier3 Operating_System