All Posts

I believe the tokens must be defined on the MC.  You should be able to do that by copying inputs.conf from a Search Head to the MC and restarting the MC.
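If these are HEC tokens (an assumption — the thread doesn't say explicitly), the kind of inputs.conf stanza being copied would look roughly like this; the token name and value are hypothetical stand-ins:

[http://my_hec_token]
disabled = 0
token = 12345678-1234-1234-1234-123456789012
index = main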
Wow! I've encountered the same issue. Thanks for posting.
I have a panel (A) that uses a token (usr.name) for input. The token is set by another panel (B) when the user clicks on a user name. In its current state the viewer gets no results in panel A when the dashboard loads, and only sees results after clicking a username in panel B. I am attempting to initialize the dashboard with a set of default results based on the currently logged-in user: if I pull up the dashboard, I would like the results in panel A to default to my username. The problem I am running into is that when I set the initial value, it is accepted as plain text instead of the token value.

<init>
  <set token="usr.name">$env:user$</set>
</init>
<row>
  <html>
    $usr.name$
  </html>
</row>

This code displays the $usr.name$ token as "$env:user$" instead of my username. My eventual goal is to have panel A display results for myself when I first access the dashboard, and for another user if I click on their name in the results of panel B. I have been coming up empty on Google, so I am reaching out here to see if anyone has any ideas.
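One workaround sometimes suggested for this (a sketch, not taken from this thread) is to resolve the username with a hidden REST search and set the token from its result; the token name mirrors the post, everything else is an assumption:

<search>
  <query>| rest /services/authentication/current-context splunk_server=local | fields username</query>
  <done>
    <set token="usr.name">$result.username$</set>
  </done>
</search>

Because the token is set in a <done> handler when the dashboard loads, panel B's click can still overwrite it afterwards.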
Hi @Dallastek1, please can you help me as well? Which app did you install in Splunk Cloud for the integration? Did you use an HF as well? I tried to configure more than one "API Key" and URL but just don't succeed. Can you explain the steps you took? Regards.
uncheck this.
So then it seems like the answer is: no, it is not possible to create a recursive macro. In that case I don't understand why there is a max recursion limit; that limit seems useless since it's not actually possible to use recursion. It should just return an error the moment recursion is detected.
So I tried a couple of search strings and I am able to see my new hosts:

index=_internal source=*metrics.log* tcpin_connections
| stats count by sourceIp

and

index=_internal source=*metrics.log* tcpin_connections
| stats count by hostname

I also tried this long search string (see below) and all the hosts are showing up. But when I try index=<index-name>, I don't see them.

index="_internal" sourcetype="splunkd" source="*metrics.lo*" group=tcpin_connections component=Metrics
| eval sourceHost=if(isnull(hostname), sourceHost, hostname)
| eval connectionType=case(fwdType=="uf","universal forwarder", fwdType=="lwf","lightweight forwarder", fwdType=="full","heavy forwarder", connectionType=="cooked" or connectionType=="cookedSSL","Splunk forwarder", connectionType=="raw" or connectionType=="rawSSL","legacy forwarder")
| eval version=if(isnull(version),"pre 4.2",version)
| eval guid=if(isnull(guid),sourceHost,guid)
| eval os=if(isnull(os),"n/a",os)
| eval arch=if(isnull(arch),"n/a",arch)
| fields connectionType sourceIp sourceHost splunk_server version os arch kb guid ssl tcp_KBps
| eval lastReceived = case(kb>0, _time)
| eval lastConnected=max(_time)
| stats first(sourceIp) as sourceIp first(connectionType) as connectionType max(version) as version first(os) as os first(arch) as arch max(lastConnected) as lastConnected max(lastReceived) as lastReceived sparkline(avg(tcp_KBps)) as "KB/s" avg(tcp_KBps) as "Avg_KB/s" by sourceHost guid ssl
| addinfo
| eval status=if(lastConnected<(info_max_time-900),"missing",if(mystatus="quiet","quiet","active"))
| fields sourceHost sourceIp version connectionType os arch lastConnected lastReceived KB/s Avg_KB/s status ssl
| rename sourceHost as Forwarder version as "Splunk Version" connectionType as "Forwarder Type" os as "Platform" status as "Current Status" lastConnected as "Last Connected" lastReceived as "Last Data Received"
| fieldformat "Last Connected"=strftime('Last Connected', "%D %H:%M:%S %p")
| fieldformat "Last Data Received"=strftime('Last Data Received', "%D %H:%M:%S %p")
| sort Forwarder
@krutika_ag  Maybe I don't entirely understand your scenario. Is there only one syslog server, or multiple ones? A properly configured syslog server does not just create duplicate entries; check your syslog configuration both on the server and on the sending nodes. As far as ensuring that the ingestion is unique, add a CRC salt and/or ensure there is a stanza in your inputs.conf that ignores older files. There is a relevant discussion here: How to avoid reindexing files after setting crcSal... - Splunk Community, and see also: inputs.conf - Splunk Documentation
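For illustration, the kind of inputs.conf stanza being described would look roughly like this; the monitor path and index are hypothetical, not from this thread:

[monitor:///var/log/syslog-feeds/*.log]
index = syslog
sourcetype = syslog
# add the full source path to the CRC check, so files with identical
# beginnings at different paths are treated as distinct inputs
crcSalt = <SOURCE>
# skip files whose modification time is more than 7 days old
ignoreOlderThan = 7d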
@tuts  As with most things Splunk, the specifics will depend on the details of your environment and chosen deployment. For the question you asked, the Splunk docs are sufficient to give you a general idea, but without more details it is harder to give more specific advice. If you're on-prem, then this is another example: Monitor files and directories with inputs.conf - Splunk Documentation

Syslog data is sent over port 514, though it can also be configured to transmit and receive on another port. It is usually UDP, which means the data is delivered best-effort with no transport guarantee, so you can potentially lose logs. Further, depending on your network environment and configuration, you will need to account for any switches and firewalls to ensure that traffic flows between the source and the syslog receiver, and then between the syslog server and Splunk, if you have not already configured Splunk to receive syslog traffic directly (and this changes depending on whether it is a Linux or Windows environment).

In the hint that Giuseppe gave above, rsyslog is a Linux-native solution that can be set up to receive syslog data from remote hosts and configured to deposit it in specific folders, which can then be monitored by a UF agent on the syslog server host.
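As a rough illustration of that rsyslog-plus-UF pattern (the port, paths, and index below are assumptions, not from this thread):

# /etc/rsyslog.d/remote.conf — receive UDP syslog and write one file per sending host
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHostFile" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHostFile")

# inputs.conf on the UF running on the syslog host
[monitor:///var/log/remote/*/syslog.log]
index = syslog
sourcetype = syslog
# take the host name from the 4th path segment (/var/log/remote/<host>/...)
host_segment = 4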
Hello, I created a Splunk app with 4 dashboards and configured the app navigation bar to only show those 4 dashboards:

<nav>
  <view name="dashboard_1"/>
  <view name="dashboard_2"/>
  <view name="dashboard_3"/>
  <view name="dashboard_4"/>
</nav>

However, the user permissions for each one of them are different depending on the user's role. There are users that can only see DASHBOARD_1, others can only see DASHBOARD_2, some can see both DASHBOARD_2 and DASHBOARD_3, among a series of other combinations...

My problem is with the users that can only view dashboards 2, 3 and 4: even though they explicitly DO NOT have permissions to view DASHBOARD_1, when they enter the app, Splunk always tries to open that one. As you can imagine, the result is an error page with the horse and the "Oops.". I understand that because it is the first one in the navigation bar, Splunk assumes it's the homepage of that app. But I also expected Splunk to take the user permissions into consideration, meaning that if a certain user only has permissions to see dashboards 3 and 4, the app navigation bar should only show those 2 options, and when they open the app, it should open on DASHBOARD_3. This was working great a few months back and at some point it just stopped - I can't be precise on when exactly that happened. I managed to find a workaround by replacing all the <view> entries shown above with:

<nav>
  <view source="all" match="dashboard_1"/>
  <view source="all" match="dashboard_2"/>
  <view source="all" match="dashboard_3"/>
  <view source="all" match="dashboard_4"/>
</nav>

However, a few days ago Splunk was updated from version 9.0.2 to 9.2.1 and the workaround stopped working as well. I'm sure I'm missing something. What can I do so that Splunk obeys the dashboard user permissions in this situation and doesn't redirect the user to a dashboard they don't even have permission to view? Thank you.
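One detail that may be worth ruling out here (an aside, not a confirmed fix from this thread): the nav XML supports marking an explicit landing page, which at least makes the home dashboard deliberate rather than position-based:

<nav>
  <view name="dashboard_1"/>
  <view name="dashboard_2" default="true"/>
  <view name="dashboard_3"/>
  <view name="dashboard_4"/>
</nav>

This does not by itself solve per-role routing, since the default would need to be a view every role can read.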
Try putting the <<FIELD>> on the right-hand side of the assignment in single quotes (since you have chosen to hide your sourcetypes, it could be that they contain special characters, which the single quotes will deal with):

| timechart span=1d count as event_count by sourcetype usenull=f
| foreach A B C D E F
    [| eval <<FIELD>>=coalesce('<<FIELD>>',0)
     | eval <<FIELD>>=if('<<FIELD>>'==0,"No events found",'<<FIELD>>')]
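For illustration, this is the eval quoting behavior the answer relies on; the field name here is a hypothetical stand-in for a sourcetype with special characters:

| makeresults
| eval "my-field" = 0
| eval status = if('my-field' == 0, "No events found", 'my-field')

On the left of an eval assignment, a field name with special characters goes in double quotes; on the right, single quotes tell eval to read it as a field reference rather than a string literal.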
You are welcome. I would try checking based on what is written here: https://docs.splunk.com/Documentation/Splunk/latest/Data/TroubleshootHTTPEventCollector

In particular:
1- Check if the HEC token is enabled (I guess so :-))
2- Verify if ACK is enabled
3- Look at the log file directly on the machine: $SPLUNK_HOME/var/log/introspection/splunk/http_event_collector_metrics.log
4- Run a more general query: index="_introspection" token
5- Enable DEBUG logging
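If it helps, a quick end-to-end test of a token from the command line (the hostname, port, and token value below are hypothetical); a healthy endpoint answers {"text":"Success","code":0}:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": "hello from curl", "sourcetype": "manual"}'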
Hello, does anyone have experience configuring Splunk DB Connect with an Informix database? Do we need to install the drivers explicitly for this to be configured? If yes, does anyone have a link where I can download these drivers? I am using a Linux environment. Thanks in advance.
For all apps, if documentation is present, it should be available under one of the tabs on the app's Splunkbase page itself.
Hi @edoardo_vicendo, thanks for the reply. Yeah, no issue with the sending of data; like you, we managed to crack it. But the HEC that receives the data is also receiving from an appliance and an AWS Firehose, on two other input tokens. Using the Splunk search I sent, I'm able to see metrics for connections, bytes ingested, and parsing errors for those two other tokens, but NONE from the token used by the UF sending S2S over HTTP.
Hello @nunoaragao, unfortunately I no longer have access to the Splunk UF to perform a check, and I never had access to the third-party Splunk where we were sending the data. By the way, I didn't really get which issue you are facing. Please remember that in outputs.conf you don't have to spell out the HEC endpoint (/services/collector/s2s), just the base URI (https://yourdomain.com) - that is, uri=https://yourdomain.com rather than uri=https://yourdomain.com/services/collector/s2s
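For reference, the corresponding outputs.conf stanza would look roughly like this (a sketch; the host, port, and token value are hypothetical):

[httpout]
uri = https://yourdomain.com:8088
httpEventCollectorToken = 12345678-1234-1234-1234-123456789012

The forwarder appends the S2S collector path itself, which is why the URI stays bare.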
If you only have one count to display, another potentially useful visualization is to shift all days into one 24-hour period. Here is a demonstration for 9am - 5pm:

| tstats count where index=_internal earliest=-30d latest=+0d@d by _time span=1h
| eval day = relative_time(_time, "-0d@d")
| where relative_time(_time, "-8h@h") > day AND relative_time(_time, "-18h@h") < day
| timechart span=1h sum(count)
| timewrap 1day
Your needs are probably better served by INDEXED_EXTRACTIONS=csv (index-time extraction) or KV_MODE=csv (search-time) in the sourcetype. Using regex to handle structured data like CSV is very fragile.
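For illustration, either setting lives in props.conf for the sourcetype; the sourcetype name below is hypothetical:

[my_csv]
# index-time: fields are extracted from the CSV header as the data is ingested
INDEXED_EXTRACTIONS = csv

# ...or, for search-time extraction instead, omit INDEXED_EXTRACTIONS and use:
# KV_MODE = csv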
I found the solution. In the end, it boiled down to a stupid mistake in a defined macro. My search really looked like this:

`my_search(param1, param2)` | `fixedrange`

...which expanded to the following snippet from my original question:

```from macro "my_search":```
...
| table _time, y1, y2, y3, ..., yN
```from macro "fixedrange":```
| append [
    | makeresults
    | addinfo
    | eval x=mvappend(info_min_time, info_max_time)
    | mvexpand x
    | rename x as _time
    | eval _t=0
    | table _time, _t ]

But `my_search` was defined like this:

| search index=... sourcetype=... param1=... param2=...

Note the leading pipe, which shouldn't have been there! Now, the search optimization produced different results depending on whether the 2nd macro was applied or not.

Case A (fast): `my_search(param1, param2)` ... produced:

| search (sourcetype=... param1=... param2=...)
| search index=...
| ...

Case B (slow): `my_search(param1, param2)` | `fixedrange` ... produced:

| search
| search (index=... sourcetype=... param1=... param2=...)
| ...

... and obviously the first search term in case B was causing the headache, although the final result set was identical in both cases. Ouch!
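In macros.conf terms, the fix described above amounts to dropping the leading pipe from the definition; a sketch, with hypothetical values standing in for the elided ones:

[my_search(2)]
args = param1, param2
definition = index="main" sourcetype="web" param1="$param1$" param2="$param2$"

Without the leading | search, the expanded text merges cleanly into the surrounding search and the optimizer can push the filters down.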
This thread is five years old and has an accepted answer, so your problem has a better chance of being seen by someone who can help if you post a new question with details about the problem, including the steps you take and the errors you see.