All Posts



@arifsaha - There is a scenario where you have LDAP set up on Splunk, which means your Splunk password is the same as your AD password, so you don't want to expose the AD password anywhere; you would rather share the token. This is just one example, but the basic idea is that you are giving access without giving out the password. Plus, you can time-bound the token. I hope this helps!
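As a sketch of the idea above: a Splunk authentication token goes in the Authorization header instead of a username/password. The host URL and token value below are placeholders, not real credentials; substitute your own instance and token.

```python
from urllib.request import Request

# Hypothetical values - replace with your own Splunk management host and token.
SPLUNK_HOST = "https://splunk.example.com:8089"
TOKEN = "your-splunk-auth-token"

def authed_request(path: str) -> Request:
    """Build a REST request authenticated with a bearer token instead of a password."""
    req = Request(f"{SPLUNK_HOST}{path}")
    # Splunk authentication tokens are sent with the Bearer scheme.
    req.add_header("Authorization", f"Bearer {TOKEN}")
    return req

req = authed_request("/services/search/jobs")
```

The AD password never appears anywhere in the request, and the token itself can be issued with an expiry so access is time-bound.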
If I use this in the subsearch - earliest=$tr_14AGuxUA.earliest$ latest=$tr_14AGuxUA.latest$ - then I get this error: Invalid value "2023-10-16T14:00:00.000Z" for time term 'earliest'
Try something like this (note that the rollover colour changes are disabled by this):

<dashboard version="1.1" theme="light">
  <label>Trellis</label>
  <row>
    <panel depends="$alwayshide$">
      <html>
        <style>
          #trellis div.facets-container div.viz-panel:nth-child(1) g.highcharts-series path { fill: red !important; }
          #trellis div.facets-container div.viz-panel:nth-child(2) g.highcharts-series path { fill: green !important; }
          #trellis div.facets-container div.viz-panel:nth-child(3) g.highcharts-series path { fill: blue !important; }
          #trellis div.facets-container div.viz-panel:nth-child(4) g.highcharts-series path { fill: yellow !important; }
        </style>
      </html>
    </panel>
    <panel>
      <chart id="trellis">
        <search>
          <query>| makeresults count=100
| eval _time=relative_time(_time,"@h")-(random()%(5*60*60))
| eval Category="Category ".mvindex(split("ABCD",""),random()%4)
| eval Value=random()%100
| timechart span=1h avg(Value) as AvgValue_Secs by Category</query>
          <earliest>-5h@h</earliest>
          <latest>@h</latest>
        </search>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisTitleY2.visibility">collapsed</option>
        <option name="charting.chart">column</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.legend.placement">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">1</option>
      </chart>
    </panel>
  </row>
</dashboard>
Hi, To get your logs visible in Log Observer, you would configure your K8S OTel deployment to send your logs to a Splunk platform instance (Enterprise or Cloud) via HEC (HTTP Event Collector). Then, in Splunk Observability Cloud, you would configure the Log Observer Connect integration to read logs from your Splunk (Cloud or Enterprise) and display them in Log Observer. The important part to understand about this approach is that the logs are not ingested or stored in Splunk Observability Cloud; they are just displayed there after being read from your Splunk platform instance (Enterprise or Cloud). Once they're visible to Log Observer, it can do useful things like correlating the logs to metrics and traces.
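For reference, the body of a HEC event is just JSON POSTed to the collector endpoint. Here is a minimal sketch of building such a payload; the sourcetype and index names are made-up placeholders, and the endpoint and header shown in the comments are the standard HEC defaults, not values from this thread.

```python
import json

def hec_event(message: str, sourcetype: str = "kube:container:app",
              index: str = "k8s_logs") -> str:
    """Build the JSON body for one event sent to /services/collector/event."""
    payload = {
        "event": message,           # the log line itself
        "sourcetype": sourcetype,   # hypothetical sourcetype
        "index": index,             # hypothetical index name
    }
    return json.dumps(payload)

body = hec_event("pod started")
# POST `body` to https://<splunk-host>:8088/services/collector/event
# with the header: Authorization: Splunk <your-HEC-token>
```

The OTel Collector's Splunk HEC exporter produces this shape for you; the sketch is only to show what lands on the Splunk platform side before Log Observer Connect reads it back.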
I have a query that retrieves user experience metrics from a Dynatrace index. I want to compare the response times for 2 different time frames. My query has a subsearch as well. In the dashboard, I have 2 time range pickers. The main query picks its time range from time range picker 1, and the subsearch uses the token from time range picker 2.  <<main search>> | appendcols [ search index="dynatrace" $tr_14AGuxUA.earliest$ - $tr_14AGuxUA.latest$ | spath output=user_actions path="userActions{}" | stats count by user_actions ] This is not retrieving any data from the subsearch. How do I fix this? If I pass hard-coded values - earliest=10/23/2023:10:00:00 latest=10/23/2023:11:00:00 - then it works fine.
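The error quoted in the related follow-up post (Invalid value "2023-10-16T14:00:00.000Z" for time term 'earliest') suggests the token is emitting ISO-8601 timestamps, which the earliest/latest time terms don't accept; they do accept epoch seconds. As an illustration of the format mismatch (this is plain Python run outside Splunk, just to show the conversion involved):

```python
from datetime import datetime, timezone

def iso_to_epoch(iso_ts: str) -> int:
    """Convert an ISO-8601 timestamp like '2023-10-16T14:00:00.000Z' to epoch seconds."""
    dt = datetime.strptime(iso_ts, "%Y-%m-%dT%H:%M:%S.%fZ")
    dt = dt.replace(tzinfo=timezone.utc)  # the trailing Z means UTC
    return int(dt.timestamp())

print(iso_to_epoch("2023-10-16T14:00:00.000Z"))  # 1697464800
```

Inside the dashboard itself the usual fix is done with the token, not with external code, e.g. by using the picker's epoch-valued token variant if one is available; the sketch above only shows why the ISO string is rejected.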
I can't access the support portal; I get redirected to https://www.splunk.com/404?ErrorCode=23&ErrorDescription=Invalid+contact   Does anyone have the same issue?
I have opened port 8088 in Windows Defender but the result is the same. Does anybody have an idea?
A couple of things:
- What user are you running this command as, and what user is Splunk installed as?
- Are you in a bash shell? If you don't quote your credentials correctly then they won't get expanded: Solved: Getting error "Could not look up HOME variable. Au... - Splunk Community
I don't think you can monitor the same "base path" twice. An ugly hack to work around that is to use (hard/soft) links.
Hi @rphillips_splk, can bar be an environment variable? Thanks
You should be able to, although it isn't called out in the docs for serverclass.conf directly. There are a couple of other configuration parameters you can set to get a bit of logic in the matching, too, if that is helpful:
whitelist.where_field
whitelist.where_equals
blacklist.where_field
blacklist.where_equals
If you think the docs are unclear and should include a multiple-wildcard example, then I suggest submitting feedback via the form at the bottom of every Splunk docs page. That team has always been responsive about improving the documentation.
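For illustration only, here is a hedged sketch of what a serverclass.conf stanza combining multiple wildcards with the where_field/where_equals parameters might look like. The class name, host pattern, and field values are invented for this example, so verify them against the serverclass.conf spec before using anything like this:

[serverClass:webFarm]
# multiple wildcards in a single whitelist entry (hypothetical host pattern)
whitelist.0 = web-*.prod-*.example.com
# optional: match on a client field other than the host/DNS name
whitelist.where_field = clientName
whitelist.where_equals = web-*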
@meshorer I have led you to the water; now you need to learn how to drink it. I don't have the knowledge, nor the time, to work out straight away how to do it the way you need, but you have the logs; now you just need to do the engineering piece and make it work in your SIEM. There is also a REST capability: you could have a script, or some other REST client in your SIEM, grab the data via REST. Maybe consider changing the logs to JSON: Before you begin, configure Splunk SOAR (On-premises) with the JSON log format by issuing the following command from the Splunk SOAR console: $phenv set_preference --logging-format json Happy SOARing!
Hi Giuseppe, I already have a lookup table created. My question is whether it is possible to import that into the Splunk Lookup File Editor rather than creating a new one from there.
The message says it all - your curl sent SYN packets but never got any reply, which means that even if your port is open, it's probably filtered by your local firewall (since you're connecting to the loopback device, it can't be anything on the external network). Check your iptables/firewalld config and open that port so that you can connect. Whether the port is actually opened by Splunk is another question; you'll see as soon as you "poke a hole" in your firewall.
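To illustrate the distinction being drawn here between a filtered port (SYN sent, no reply, which is what curl's timeout indicates) and a closed one (RST received straight away), a small sketch; the 3-second timeout is an arbitrary choice:

```python
import socket

def check_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Attempt a TCP connect and classify the result."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            return "open"          # something accepted the connection
        except socket.timeout:
            return "filtered"      # SYN sent, no reply: typical of a firewall DROP rule
        except ConnectionRefusedError:
            return "closed"        # RST received: host reachable, nothing listening
```

On loopback, "filtered" almost always means a local firewall rule rather than anything on the network path, which is the point made above.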
Hi @mlevsh, maybe you should try a different approach to index creation: usually, different indexes are used when there are different retention periods and/or different access grants. Indexes are silos in which it's possible to store data; different data are differentiated by sourcetype, not by index. So you could reduce the number of indexes: 280 indexes are very difficult to manage and to use; why do you have so many indexes? There isn't any sense in having one sourcetype per index: indexes aren't database tables. The best approach is usually to limit the time range that a user can use in a search, not the indexes. Ciao. Giuseppe
Hey @siraj , there should be no need to modify the Generated Search, as both the aggregate_raw_into_entity and aggregate_raw_into_service macros are intended to be part of the KPI's SPL. Are you getting the error when running the Generated Search in a separate Search tab? If so, what App context are you in while attempting to run the search? To troubleshoot, follow the instructions in the error message to make sure that your user account has the appropriate permission for the macro. Also, make sure that the macro is not shared only within the SA-ITOA app if you're trying to run the test search in a different App context. Both of these settings are accessed from the Permissions setting of the macro. Let me know if this helps, avd
Can anyone shed any light on an issue I am having with a Splunk Cloud deployment? I have a Splunk heavy forwarder set up on Red Hat Linux 8 ingesting Cisco switches via syslog. This appears to be working fine for the vast majority of devices; I can see the individual directories and logs dropping into /opt/splunklogs/Cisco/. There is just one Cisco device that isn't being ingested. I have compared the config on the switch to the others and it is set up correctly (logging host/trap etc.). I can telnet from the switch to the interface on the Linux server and see the syslog hitting the interface via tcpdump. I have never had to populate an allow list for the switch IPs; the forwarder looks to handle them automatically, and I can see the Cisco directories on the forwarder are generated by Splunk. For some reason this one switch just isn't being ingested. Does anyone have any guidance on some troubleshooting steps to establish what the issue is? Thanks
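One way to narrow down where the drop happens is to inject a test syslog message from the forwarder host itself, spoofing nothing but using the same port your syslog input listens on, and then check whether it appears under /opt/splunklogs/Cisco/. A minimal sketch; the port and tag are assumptions, so match them to your actual syslog listener config:

```python
import socket

def send_test_syslog(host: str, port: int = 514, tag: str = "splunk-test") -> bytes:
    """Send a minimal RFC 3164-style syslog message over UDP and return the bytes sent."""
    # <134> = facility local0 (16), severity informational (6): 16*8 + 6 = 134
    msg = f"<134>{tag}: test message from syslog sender".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))
    return msg
```

If a locally injected message shows up but the switch's messages don't, the problem is upstream of the syslog daemon (firewall, allow list, source IP filtering); if it doesn't show up either, the daemon's filtering or file-naming rules are the place to look.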
Hi, my Splunk server is reachable at http://127.0.0.1:8000/fr-FR/app/launcher/home I am trying to send data to my Splunk server with the curl command below: curl -H "Authorization: Splunk 1f5de11f-ee8e-48df-b4f1-eb1bbb6f3db0" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'  But I get the message: curl: (7) Failed to connect to localhost port 8088 after 2629 ms: Couldn't connect to server  Could you help please?
I have data that has multiple columns containing timings for particular tasks on particular dates. I want to hide all but the last column when in a line chart. The sticking point is I want the line chart to still show the x-axis "process" name labels from the previously collected data; it just wouldn't connect the lines until that task is complete. This will allow the chart to show progression. I believe I found the CSS method for doing this, but I'm not sure how to accomplish it in Dashboard Studio code. Example:

Process | 08/24/2023 10:15:45 | 09/24/2023 11:15:44 | 10/24/2023 10:45:00
Task1   | 2.44                | 1.44                | 8.55
Task2   | 1.44                | 18.44               | 8.43
Task3   | 8.22                | 4.24                |
Task4   | 4.44                | 8.12                |

The idea is that the line chart would only show the last column in the list above, but still show all the process tasks on the x-axis. The example I created in Paint below shows that the x-axis still has the labels, but the lines haven't been connected yet for the tasks that haven't completed.
Hello, is it possible to have a mydirectory\*.log monitor stanza route data to the usual indexers (or any specific monitor stanza) AND have another, more specific mydirectory\file.log stanza route to a different _TCP_ROUTING? Thanks.