All Posts

Hi @mlevsh, maybe you should try a different approach to index creation: usually, different indexes are used when there are different retention periods and/or different access grants. Indexes are silos in which it's possible to store data; different data are differentiated by sourcetype, not by index. So you could reduce the number of indexes: 280 indexes are very difficult to manage and to use. Why do you have so many indexes? There isn't any sense in having one sourcetype per index; in other words, indexes aren't database tables. The best approach is usually to limit the time range that a user can search, not the number of indexes. Ciao. Giuseppe
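As a sketch of what "limit the time" can look like in practice, assuming you manage roles via authorize.conf (the role name here is hypothetical):

  # authorize.conf on the search head
  [role_limited_analyst]
  importRoles = user
  # maximum time span of a search, in seconds (here: 7 days)
  srchTimeWin = 604800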
Hey @siraj , there should be no need to modify the Generated Search, as both the aggregate_raw_into_entity and aggregate_raw_into_service macros are intended to be part of the KPI's SPL. Are you getting the error when running the Generated Search in a separate Search tab? If so, what App context are you in while attempting to run the search? To troubleshoot, follow the instructions in the error message to make sure that your user account has the appropriate permission for the macro. Also, make sure that the macro is not shared only within the SA-ITOA app while you're trying to run the test search in a different App context. Both of these settings are accessed from the Permissions setting of the macro. Let me know if this helps, avd
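A quick way to check where those macros are shared, sketched with the admin/macros REST endpoint (field names may vary slightly by Splunk version):

  | rest /servicesNS/-/-/admin/macros splunk_server=local
  | search title="aggregate_raw_into_*"
  | table title eai:acl.app eai:acl.sharing eai:acl.perms.read

If eai:acl.sharing shows "app" for SA-ITOA only, changing it from the macro's Permissions page should make it visible in other App contexts.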
Can anyone shed any light on an issue I am having with a Splunk Cloud deployment? I have a Splunk heavy forwarder set up on Red Hat Linux 8 ingesting Cisco switches via syslog. This appears to be working fine for the vast majority of devices; I can see the individual directories and logs dropping into /opt/splunklogs/Cisco/. There is just one Cisco device that isn't being ingested. I have compared the config on the switch to the others and it is set up correctly (logging host/trap etc.). I can telnet from the switch to the interface on the Linux server and see the syslog hitting the interface via tcpdump. I have never had to populate an allow list for the switch IPs; it looks to handle them automatically on the forwarder, and I can see the Cisco directories on the forwarder are generated by Splunk. For some reason this one switch just isn't being ingested. Does anyone have any guidance on some troubleshooting steps to try and establish what the issue is? Thanks
Hi, my Splunk server is reachable at http://127.0.0.1:8000/fr-FR/app/launcher/home. I am trying to send data to my Splunk server with the curl command below:

  curl -H "Authorization: Splunk 1f5de11f-ee8e-48df-b4f1-eb1bbb6f3db0" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'

But I get the message:

  curl: (7) Failed to connect to localhost port 8088 after 2629 ms: Couldn't connect to server

Could you help please?
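That error means nothing is listening on port 8088, which usually indicates HEC is disabled or bound to another port. As a hedged first check (assuming the default SSL-enabled HEC setup), the collector health endpoint answers without a token:

  curl -k https://localhost:8088/services/collector/health

If that also fails to connect, enable HEC (Settings > Data Inputs > HTTP Event Collector > Global Settings) and confirm the port configured there.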
I have data that has multiple columns that contain timings for particular tasks on particular dates. I want to hide all but the last column when in a line chart. The sticking point is I want the line chart to still show the x-axis labels ("process" names) from the previous data collected; it just wouldn't connect the lines until that task is complete. This will allow the chart to show progression. I believe I found the CSS method for doing this, but I'm not sure how to accomplish this in Dashboard Studio code. Example:

  Process   08/24/2023 10:15:45   09/24/2023 11:15:44   10/24/2023 10:45:00
  Task1     2.44                  1.44                  8.55
  Task2     1.44                  18.44                 8.43
  Task3     8.22                  4.24
  Task4     4.44                  8.12

The idea would be that the line chart would only show the last column in the list above, but still show all the process tasks on the x-axis. The example I created in Paint below shows that the x-axis still has the labels, but the lines haven't been connected yet since those tasks haven't completed.
Hello, is it possible to have a mydirectory\*.log monitor stanza route data to the usual indexers (or any specific output group) AND another, more specific mydirectory\file.log stanza route to a different _TCP_ROUTING? Thanks.
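In principle, per-input _TCP_ROUTING is meant for exactly this. A minimal sketch, assuming Windows paths and hypothetical group names; note that overlapping monitor stanzas for the same file can behave unexpectedly, so test carefully:

  # outputs.conf
  [tcpout]
  defaultGroup = primary_indexers

  [tcpout:primary_indexers]
  server = idx1.example.com:9997, idx2.example.com:9997

  [tcpout:special_indexers]
  server = special-idx.example.com:9997

  # inputs.conf
  [monitor://C:\mydirectory\*.log]
  sourcetype = mylogs

  [monitor://C:\mydirectory\file.log]
  sourcetype = mylogs
  _TCP_ROUTING = special_indexers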
@niketn  I miss you, my friend. I remember this started a great bunch of conversations between us that included a hug at .conf19. I want to give a shout out to @kaeleyt for providing my go-to solution for this problem: https://community.splunk.com/t5/Splunk-Search/How-to-add-colors-to-a-table-for-dynamic-columns/m-p/411419 After looking further, I found this line in the documentation, https://docs.splunk.com/Documentation/Splunk/latest/Viz/TableFormatsXML: "If you do not specify a field, the format rule is applied to the entire table." So the magic is not specifying a field in the line:

  <format type="color">

I also want, as Niket taught me by example, to include a run-anywhere example implementing the solution:

  <dashboard version="1.1">
    <label>Erics Column Test</label>
    <row>
      <panel>
        <title>Data Example</title>
        <table>
          <search>
            <query>index=_internal sourcetype=splunkd log_level!=INFO earliest=-7m@m latest=now | eval Time=strftime(_time,"%Y-%m-%d %H:%M") | chart count as Error by component Time</query>
            <earliest>-1h@h</earliest>
            <latest>now</latest>
            <sampleRatio>1</sampleRatio>
          </search>
          <option name="count">100</option>
          <option name="dataOverlayMode">none</option>
          <option name="drilldown">none</option>
          <option name="percentagesRow">false</option>
          <option name="rowNumbers">false</option>
          <option name="totalsRow">false</option>
          <option name="wrap">true</option>
          <format type="color">
            <colorPalette type="list">[#118832,#1182F3,#CBA700,#D94E17,#D41F1F]</colorPalette>
            <scale type="threshold">0,30,70,100</scale>
          </format>
        </table>
      </panel>
    </row>
  </dashboard>
Please share your full search, as the advice already given seems to fix the apparent errors in your example.
@phanTom, anything on this? Thank you in advance.
Can multiple wildcards be used in a serverclass.conf whitelist file?

  whitelist.from_pathname = /lookup/host.txt

Examples:

  M*WEB*
  *WBS*
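For what it's worth, whitelist entries in serverclass.conf accept wildcard patterns, and a from_pathname file is read as one pattern per line, so entries with multiple * wildcards like these should match. A sketch (the server class name is hypothetical):

  # serverclass.conf
  [serverClass:web_hosts]
  whitelist.from_pathname = /lookup/host.txt

  # /lookup/host.txt - one pattern per line
  M*WEB*
  *WBS*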
What can be the solution here, as I'm creating this query dynamically with format and passing it as an input to the base query? How can I escape these special characters?
No. As @bowesmana already told you, the -d "something" option sends the data you specify on the command line. If you want the data to be read from a file, you have to specify it as the source for the POST data with the -d @filename option. And there is no "templating": you just specify raw data to be posted. So it will not work like "get a part of the data from the command line and iterate some file's contents over it". If you want something like that, you have to implement it manually (bash scripting, Python, PowerShell, whatever).
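A minimal bash sketch of that manual approach, assuming events.json holds one HEC-ready JSON object per line and a placeholder token:

  # send each line of events.json as its own HEC request
  while IFS= read -r line; do
    curl -k -H "Authorization: Splunk <your-token>" \
         https://localhost:8088/services/collector/event \
         -d "$line"
  done < events.json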
No. The SHC will not replicate files you manually place on one of the members of the cluster; that's what the deployer is for. You could manually place some content on each of the SHs in the cluster and that could work for some time (this is also why you don't distribute/overwrite built-in apps from the deployer: so you don't cause conflicts in case of an upgrade). Also, even with the deployer in place, you have to manually push the configs; it will not happen automatically.
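For reference, the manual push from the deployer looks something like this (hostnames and credentials are placeholders; -target can be any one member of the SHC):

  splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme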
Your quotes before the http appear to be two SINGLE quotes rather than a double quote. Once you fix that, you get a different error about dynamic fields, and it looks like it doesn't like the $ sign in the searchmatch string.
Not sure you understood my question. The curl command below creates an event with "hello world":

  curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":"hello world"}'

Imagine that in my JSON file I have many items with different event names, for example "hello world", "hello world1", "hello world2", and so on. Is the right curl command to apply something like this?

  curl -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890AB" https://localhost:8088/services/collector/event -d '{"event":}'

What I mean is: if I don't mention the name of the event, will 3 events be created in Splunk with "hello world", "hello world1", "hello world2"?
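For context, HEC does accept several events in one request when the body is a sequence of complete JSON objects (no array brackets, no empty "event" placeholder). A sketch, with a placeholder token:

  # events.json
  {"event":"hello world"}
  {"event":"hello world1"}
  {"event":"hello world2"}

  curl -k -H "Authorization: Splunk <your-token>" https://localhost:8088/services/collector/event -d @events.json

Each object becomes its own event; an incomplete body like '{"event":}' is rejected as invalid JSON.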
That's not really a Splunk or ES-related question; it's related to your data and your use cases. If you filter out some data, you don't have it. And if you don't have events, you can't base your searches (and thus use cases) on them. As simple as that. It's more of a Windows-related question for your admins, to help you review the use cases you want to enable.
+1 on that. Why, and in what system, should 14.84 ever mean 14.084? That's what leading zeros are for. It's definitely an application error. Also, where do you get that value from? Is it the _time field or some other time? While the app should be fixed either way, if it's the main timestamp of the event, it's simply wrong as the event's proper timestamp.
Hey @Sak08092015 , are you able to specify which exact Apex EventType you are trying to send to Splunk? Is it one of the standard Apex EventTypes that are EventLogFile-supported (e.g. the Apex REST API EventType)? avd
Also remember that JSON does not support comments.
I don't get it.

  index=vulnerability_scan Risk=Critical earliest=-7d latest=now
  | stats values(CVE) as CVE_7d by extracted_Host
  | appendcols
      [ search index=vulnerability_scan Risk=Critical earliest=now-7d latest=now
        | stats values(CVE) as CVE_now by extracted_Host ]

I see two practically identical searches (one has earliest=-7d and the other earliest=now-7d, which mean the same thing). The only difference between them might arise if some event got ingested between the runs of the outer search and the inner one.
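If the intent was actually to compare the previous week against the most recent week, a single search over both windows might be closer to it, sketched here under that assumption:

  index=vulnerability_scan Risk=Critical earliest=-14d latest=now
  | eval window=if(_time >= relative_time(now(), "-7d"), "CVE_now", "CVE_7d")
  | stats values(CVE) as CVEs by extracted_Host window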