All Posts


Hi @AleCanzo,

Can you try the below?

a:focus {
  outline: none !important;
  box-shadow: none !important;
}
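(For context, a minimal sketch of how such a CSS file is typically referenced from a classic Simple XML dashboard, assuming a hypothetical file name remove_focus_outline.css placed in the app's appserver/static directory - adjust to your own file name and app:)

<dashboard version="1.1" stylesheet="remove_focus_outline.css">
  <label>Example dashboard</label>
  <!-- existing rows and panels go here unchanged -->
</dashboard>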
@livehybrid's idea with an on-top row for the full URL is pretty close to what I wanted to achieve. As for filtering or searching by the full URL, I can still do it using something like: | search _full_url="*$token_for_search$*"
I just tested this approach and think that, at least for now, it suits my goal.
Hi guys, I'm searching for a way to disable the outline of the links in a Splunk classic dashboard. There was a similar question on the community, but I don't understand the answers. In my CSS I'm trying: a:focus { outline: none !important; } but it doesn't work. Thank you!
Hi @Keigo,

You're using the Splunk Universal Forwarder with the Linux Add-on on a 2 vCPU / 4 GB RAM VM. A script (hardware.sh) that runs lshw causes 20–40% CPU spikes, which may impact performance. lshw is not lightweight and is overkill for most use cases, so this behavior is expected, especially on low-spec machines. Below are the recommendations:
1. Check whether you actually need the hardware data.
2. If you do, reduce the frequency of the input to minimize impact (a sketch follows below).
3. Alternative: run the script via cron during off-peak hours and monitor its output file with Splunk.
4. Use lightweight tools like collectd for performance metrics instead of heavy scripts.
5. Recommended specs (if you keep such scripts): 4 vCPUs and 6–8 GB RAM for better performance.
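(A minimal inputs.conf sketch for point 2, assuming the default script stanza name shipped with the Splunk Add-on for Unix and Linux - adjust the path and interval to match your installation:)

# local/inputs.conf in the Splunk Add-on for Unix and Linux on the forwarder
# Run hardware.sh once a day instead of the default interval.
[script://./bin/hardware.sh]
interval = 86400
disabled = 0
# Or turn the input off completely:
# disabled = 1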
The add-on for *nix contains several inputs. Some of them are more useful, some less... The question is why you would run this input in the first place. Is this your only source of HW inventory? And even then - this is something that doesn't change often, so the interval between subsequent runs can be quite long without any significant impact on the usefulness of the output data.
What access was given to the service account for the connection to happen?
The old Splunk dashboard examples app (https://classic.splunkbase.splunk.com/app/1603/) is no longer supported, but it can still be downloaded, and it can give you an idea of how to write some extensions that would, for example, give you a tooltip on hover over the URL, depending on your level of CSS/JavaScript skills.
Trying to extract some data from a hybrid log where the format is <Syslog header> <JSON Data>. I have had success extracting via spath and regex at search time, but I want to do this before ingestion, so I am trying to complete the field extractions on a heavy forwarder using props.conf and transforms.conf. I got this working to a degree, but it only partly functions: for some logs the nested key=value pairs in msg are not fully extracted, and some logs don't extract anything for the JSON at all.

An example of one of many log types (all in this <Syslog header> <JSON Data> format):

Aug 3 04:45:01 server.name.local program {"_program":{"uid":"0","type":"newData","subj":"unconfined","pid":"4864","msg":"ab=new:session_create creator=sam,sam,echo,ba_permit,ba_umask,ba_limits acct=\"su\" exe=\"/usr/sbin/vi\" hostname=? addr=? terminal=vi res=success","auid":"0","UID":"user1","AUID":"user1"}}

Problems seen: creator=sam stops at the first comma, and acct=\ and exe=\ don't collect the data after the \.

The following two logs had no field extractions from the JSON at all:

Aug 3 04:31:01 server.name.local program {"_program":{"uid":"0","type":"SYSCALL","tty":"pts1","syscall":"725","su":"0","passedsuccess":"yes","pass":"unconfined","id":"0","sess":"3417","pid":"4568732","msg":"utime(1754195461.112:457):","items":"2","gid":"0","fsuid":"0","fsgid":"0","exit":"3","exe":"/usr/bin/vi","euid":"0","egid":"0","comm":"vi","auid":"345742342","arch":"c000003e","a3":"1b6","a2":"241","a1":"615295291b60","a0":"ffffff9c","UID":"user1","SYSCALL":"openmat","SUID":"user1","SGID":"user1","GID":"user1","FSUID":"user1","FSGID":"user1","EUID":"user1","EGID":"user1","AUID":"user1","ARCH":"x86_64"}}

Aug 3 04:10:01 server.name.local program {"_program":{"type":"data","data":"/usr/bin/vi","msg":"utime(1754194201.112:457):"}}

Thanks in advance for any help.
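(For reference, a minimal search-time sketch of the spath/regex approach mentioned above - the field names syslog_header, json_payload and acct are illustrative, not taken from the original configuration. The idea is to isolate the JSON portion first, run spath over it, and then pull individual key=value pairs out of the nested msg value in a second pass:)

| rex field=_raw "^(?<syslog_header>.+?)\s(?<json_payload>\{.*\})$"
| spath input=json_payload
| rex field=_program.msg "acct=\"(?<acct>[^\"]+)\""

Because the msg value is itself a string of key=value pairs, the first-comma and backslash problems generally need this kind of second extraction pass rather than a single JSON extraction over the whole event.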
I am using the Splunk Add-on for Microsoft Windows. The inputs.conf files on the hosts are located in: C:\SplunkUF\etc\apps\Splunk_TA_windows\local\inputs.conf
I am getting records from 5 or more .log files.
Hi @Shakeer_Spl

Are you able to see the data land in *any* index? (e.g. main?) If so, can you confirm the sourcetype matches the one configured in inputs.conf? I assume (but want to check) that the indexes have been created on the indexers, and that you have appropriate RBAC/access to view the contents?

Are you able to see the UF sending logs to _internal on your indexers? If not, this would indicate that the issue lies with either the output (from the UF) or the input (into the IDX).

Are there any other props/transforms that apply to that sourcetype in your props.conf?

Sorry for all the questions (in addition to those already asked re the HF etc.), there is a lot to establish in a situation like this!

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
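(A minimal sketch of the _internal check mentioned above, assuming a hypothetical forwarder hostname my-uf-host - substitute your own:)

index=_internal host=my-uf-host source=*metrics.log*
| stats count by host, source

If the forwarder shows up here but its event data does not appear in any index, the problem is more likely on the parsing/routing side than in the UF-to-indexer connection.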
Here's a 3rd option if it's helpful. This starts off with a hidden panel; clicking on a row in the table sets a token containing the full URL, which unhides the panel and displays the full URL for the clicked row.

<dashboard version="1.1">
  <label>Long URL demo (makeresults + hidden full value)</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults count=5
| streamstats count
| eval schemes=split("https,https,https,http,http", ",")
| eval hosts=split("alpha.example.com,beta.example.org,gamma.example.net,delta.example.io,epsilon.example.dev", ",")
| eval paths=split("shop/products/42,blog/2024/10/15/welcome,api/v1/users/12345/profile,media/images/2024/10/banner,docs/guides/install/linux", ",")
| eval queries=split("ref=newsletter&amp;utm=fall,?tag=splunk&amp;src=forum,?session=abc123&amp;feature=beta,?size=large&amp;color=blue,?step=1&amp;mode=advanced", ",")
| eval fragments=split("#top,#comments,#details,#preview,#faq", ",")
| eval url_full=mvindex(schemes,count-1)."://".mvindex(hosts,count-1)."/".mvindex(paths,count-1).mvindex(queries,count-1).mvindex(fragments,count-1)
| eval host="web-server-00".count
| eval _full_url=url_full
| eval url_display=if(len(url_full)&gt;60, substr(url_full,1,60)."…", url_full)
| table host url_display _full_url</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">row</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <set token="full_url_token">$row._full_url$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row depends="$full_url_token$">
    <panel>
      <html>
        <h3>Full URL</h3>
        <p>$full_url_token$</p>
      </html>
    </panel>
  </row>
</dashboard>

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
At first glance it should work.
1. Are you by any chance using INDEXED_EXTRACTIONS?
2. Is your data sent straight from UF to indexers or do you have any HF in the middle?
Goal: route logs from combined_large.log to webapp1_index or webapp2_index based on log content ([webapp1] or [webapp2]).

Setup:
Universal Forwarder: Windows (sending logs)
Indexer: Windows (receiving & parsing)
Logs contain [webapp1] or [webapp2]
Routing is expected to happen on the indexer

Sample log:
2025-05-03 16:41:36 [webapp1] Session timeout for user
2025-04-13 20:25:59 [webapp2] User registered successfully

inputs.conf (on UF):
[monitor://C:\logs\combined_large.log]
disabled = false
sourcetype = custom_combined_log
index = default

props.conf (on Indexer):
[custom_combined_log]
TRANSFORMS-route_app_logs = route-webapp1_index, route-webapp2_index

transforms.conf (on Indexer):
[route-webapp1_index]
REGEX = \[webapp1\]
DEST_KEY = _MetaData:Index
FORMAT = webapp1_index

[route-webapp2_index]
REGEX = \[webapp2\]
DEST_KEY = _MetaData:Index
FORMAT = webapp2_index

Tried:
Verified the file is being read
Confirmed btool loads the configs
Restarted services
Re-indexed by duplicating the file

Issue: logs are not appearing in either webapp1_index or webapp2_index.

Questions:
Is this config correct?
Am I missing a key step, or is this the wrong config location?
Any way to debug routing issues? (some common checks are sketched below)

Any help or insight would be greatly appreciated. Thanks in advance.
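(For the last question, a couple of hedged checks that are commonly used for this kind of routing issue - the stanza names are taken from the configs above. From the indexer's bin directory, confirm the effective configuration that the indexer actually loads:)

splunk btool props list custom_combined_log --debug
splunk btool transforms list route-webapp1_index --debug

(And in search, to see which index the events actually landed in, if any:)

index=* sourcetype=custom_combined_log | stats count by index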
Being a client of oneself can have some strange results, especially if you deploy an app modifying DS behaviour (especially repo location). I suppose it could lead to a restart loop or some similar "funny" side effects. But even without it, you could accidentally push some general settings involuntarily modifying DS behaviour in an undesired way (even disabling it entirely).
I'm not aware of any built-in visualization component providing such functionality. In a Simple XML dashboard you could probably do that with custom JS. Of course @livehybrid's idea can shorten your data if it's over a certain limit, but you're left with just a shortened version - no "click to unwrap" functionality.
Hi @danielbb

The docs state:

Important: The deployment server cannot be a deployment client of itself. If it is, the following error will appear in splunkd.log: "This DC shares a Splunk instance with its DS: unsupported configuration". This has the potential to lead to situations where the deployment clients lose their ability to contact the deployment server.

https://help.splunk.com/en/splunk-enterprise/administer/update-your-deployment/9.4/configure-the-deployment-system/configure-deployment-clients#:~:text=The%20deployment%20server%20cannot%20be%20a%20deployment%20client%20of%20itself

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
It's an unsupported configuration.  It was explicitly prohibited at one time, but I can't find that documented now. Note that it is possible for a DS to be a client of another DS.  This hierarchical structure has been used to manage multiple DSs when there are too many endpoints for a single DS to handle.
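(A minimal sketch of what that hierarchical setup looks like on an intermediate deployment server, assuming a hypothetical upper-level server name upper-ds.example.com - the intermediate instance points its deploymentclient.conf at the other DS, never at itself:)

# deploymentclient.conf on the intermediate deployment server
[deployment-client]

[target-broker:deploymentServer]
targetUri = upper-ds.example.com:8089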
Dinesh, please create a support case for us to troubleshoot further: https://mycase.cloudapps.cisco.com/