All Posts

Hello, yes, the UF is already set up on the secondary environment. On the first environment we use _TCP_ROUTING, as we also have two Splunk platforms...
@livehybrid I ran | rest /services/cluster/manager/health but while the CM was down at that time I still got the value 1, when it should show 0. I need to create an alert for this, but I am not getting the correct output.
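One way this can be made alertable, as a sketch (your_cm_name is a placeholder for the cluster manager's server name): when the CM is unreachable, the rest call typically returns no rows for it rather than a health value, so counting rows distinguishes the two cases.

| rest /services/cluster/manager/health splunk_server=your_cm_name
| stats count AS cm_responses
| eval cm_up=if(cm_responses > 0, 1, 0)
| table cm_up

You could then alert when cm_up=0. Depending on version, the rest call may fail outright instead of returning zero rows, in which case an alert on the scheduled search itself failing is a useful backstop.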
Still, that is an easier task than managing those search filters. I have managed more than 400 in one environment, and when you are using Splunk volumes for those, it's not actually too big an issue.
Actually, sorry, ignore me. I understood why. I understood it all, so all good. Thanks a lot again.
Thanks a lot. I removed <done> strftime... It's working fine. Only one panel works at a time; the other one says 'waiting for an input'. To understand this better, I tried to work out why the same thing is not working in this code, but I failed to determine the difference. The code below runs all 3 queries in the panels in parallel:

<input type="dropdown" token="spliterror_1" searchWhenChanged="true">
  <label>Splits</label>
  <choice value="*">All</choice>
  <choice value="false">Exclude</choice>
  <choice value="true">Splits Only</choice>
  <prefix>isSplit="</prefix>
  <suffix>"</suffix>
  <default>$spliterror_1$</default>
  <change>
    <condition label="All">
      <set token="ShowAll">*</set>
      <unset token="ShowTrue"></unset>
      <unset token="ShowFalse"></unset>
    </condition>
    <condition label="Exclude">
      <unset token="ShowAll"></unset>
      <set token="ShowFalse">false</set>
      <unset token="ShowTrue"></unset>
    </condition>
    <condition label="Splits Only">
      <unset token="ShowAll"></unset>
      <unset token="ShowFalse"></unset>
      <set token="ShowTrue">true</set>
    </condition>
  </change>
</input>
<table depends="$ShowAll$">
  <title>% Ratio on selected (Sorted by Failed)</title>
  <search>
    <query>my query</query>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
<table depends="$ShowTrue$">
  <title>% Ratio on selected (Sorted by Failed)</title>
  <search>
    <query>my search</query>
    <earliest>$timeerror_1.earliest$</earliest>
    <latest>$timeerror_1.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
<table depends="$ShowFalse$">
  <title>% Ratio on selected (Sorted by Failed)</title>
  <search>
    <query>my search</query>
    <earliest>$timeerror_1.earliest$</earliest>
    <latest>$timeerror_1.latest$</latest>
  </search>
  <option name="drilldown">none</option>
  <option name="refresh.display">progressbar</option>
</table>
</panel>
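For anyone hitting the 'waiting for an input' message: a panel with depends="$token$" only renders once that token has a value, and none of the Show* tokens exist until the dropdown's change handler fires. A minimal sketch of one fix, reusing the token name from the code above, is to set one token up front in an init block at the top of the form:

<init>
  <set token="ShowAll">*</set>
</init>

With that, the ShowAll panel renders on first load and the change conditions take over from there.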
I am also facing the same issue. I have followed your steps, adding it in the layout section and the definition, but still no luck. Following for a solution.
Sorry, I kind of fell off there, but I just wanted to update in case others see this. Basically the problem is with the "fully populated" case.

For fully populated data, why not use this?

index=example | stats avg(field1) perc95(field2) by x,y,z a,b,c

I may not have been very clear here, but basically this would not work, because what I'm looking for is:

avg(field1)  perc95(field2)  x   y   z   a  b  c
f1g1                         10  20  30
             f2g2                        1  2  3
f1g3                         40  50  60
             f2g4                        4  5  6

Here we have agg stats for four groups, g1 to g4. For example, g1 represents the stats for the grouping x=10, y=20, z=30, a=*, b=*, c=*, and g4 represents the stats for the group of transactions with x=*, y=*, z=*, a=4, b=5, c=6. Just a stats doesn't help us here because of overlap; for instance, g1 contains events of g2 (g1 contains events with a=1,b=2,c=3 and g2 contains events with x=10,y=20,z=30).
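For anyone with the same requirement, one way to get that exact row shape without the overlap problem is to compute the two groupings separately and append them; a sketch, assuming the same example index as above:

index=example
| stats avg(field1) perc95(field2) by x y z
| append
    [ search index=example
    | stats avg(field1) perc95(field2) by a b c ]

Each output row then carries only its own group-by fields (x/y/z or a/b/c), which matches the table layout above; the usual subsearch result limits on append apply to large datasets.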
Lots of good comments from people who know here, so just to add my thoughts, specifically regarding your use of a lookup as a search constraint. Subsearches with large search constraints coming from a lookup are less efficient than using the lookup as a lookup, i.e.

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| lookup 12k_line.csv User OUTPUT Found
| where isnotnull(Found)
...

The general issue is that a subsearch will run first and, in your case, will return the SPL phrase (User=1 OR User=2 OR ... User=5000 OR ... User=12000), and THEN it will add that to the SPL that gets executed, so that huge block of expanded SPL has to be parsed, whereas the lookup is likely to be far more efficient. You can see what the subsearch will expand to by running this search:

| inputlookup 12k_line.csv | fields User | format
Improving the DM acceleration searches can be tricky, as others have pointed out, so can you identify other non-DMA searches that are at the top of the list? Searches that just take a long time are not necessarily bad searches; they may simply be handling large datasets. Poorly performing searches can come from badly written dashboard searches that use joins or other poor techniques. They can also come from bad saved searches, again due to bad search techniques. It's often these user-written searches that can bring Splunk to its knees. Of course, it's also possible that you just don't have enough grunt - what licence model is your Splunk Cloud using, SVCs or ingest?
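As a starting point for identifying those candidates, a sketch that assumes you can search the _audit index:

index=_audit action=search info=completed
| stats count avg(total_run_time) AS avg_runtime max(total_run_time) AS max_runtime by user search
| sort - avg_runtime
| head 20

That surfaces the searches with the longest average run times per user, and from there you can inspect the SPL for joins and other expensive patterns.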
See @chrisyounger's number viz app https://splunkbase.splunk.com/app/4537 - it can do what you want.
Hi @splunkreal

Are you able to set the index in the inputs.conf on the UF in your secondary environment? If not, then you will need to use props/transforms as described - however, this configuration will not work by default on a UF, as this parsing is done on a HF/indexer. I presume this is currently applied to the UF, otherwise it would also change the configuration for your primary environment?

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
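For reference, a minimal sketch of the UF-side inputs.conf in the secondary environment, assuming the Security event log channel (the stanza and index name are illustrative):

[WinEventLog://Security]
index = sec_windows
disabled = 0

Setting index= per input stanza on the secondary environment's UFs leaves the primary environment's configuration untouched.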
Hello, we have Windows servers from two environments. We want the WinEventLog source (Windows event logs) from the main environment to go to the "windows" index and from the secondary environment to go to "sec_windows". On the UFs in the secondary environment we have set up inputs.conf with index = sec_windows, but this doesn't work: everything goes to the windows index. Could you help? Thank you very much.

props.conf

[source::WinEventLog:*]
TRANSFORMS-set_index_sec_windows = set_index_sec_windows
TRANSFORMS-set_index_windows_wineventlog = set_index_windows_wineventlog

transforms.conf

# Windows
[set_index_windows_wineventlog]
SOURCE_KEY = MetaData:Source
REGEX = WinEventLog
DEST_KEY = _MetaData:Index
FORMAT = windows

[set_index_sec_windows]
SOURCE_KEY = _MetaData:Index
REGEX = sec_windows
DEST_KEY = _MetaData:Index
FORMAT = sec_windows
@richgalloway but as I said, we have nearly 200 config IDs, so we would need to create 200 indexes, which would be very difficult to maintain. That is the concern here.
Here https://splunkbase.splunk.com/app/6368 is one cool Splunk app to use on both SCP and on-prem. Basically it's btool with some additions; I strongly recommend it.
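For anyone unfamiliar with the underlying tool, classic btool prints the effective merged configuration; for example, to see all inputs settings and which file each one comes from:

$SPLUNK_HOME/bin/splunk btool inputs list --debug

The app above layers additions on top of that idea, per its description.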
The only sure way to control access to data is by index. Have a separate index for each set of access rules. In other words, sources "123456" and "456789" should be in separate indexes, and only roles that need access to the source should have access to the corresponding index.
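As a sketch of what that looks like in authorize.conf (the role and index names are illustrative):

[role_source_123456]
srchIndexesAllowed = idx_source_123456
srchIndexesDefault = idx_source_123456

Users then get the role for the sources they are entitled to, and index-level enforcement does the rest.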
I think your whitelist setting should be correctly formatted; try using whitelist = 4624,4625 to ensure proper filtering, and confirm whether renderXml=false is appropriate, as XML-based logs may require renderXml=true for accurate extraction.

Next, check if Windows is generating these events by running this command in PowerShell:

Get-WinEvent -LogName Security | Where-Object { $_.Id -eq 4624 -or $_.Id -eq 4625 } | Select-Object -First 10

If no events appear, ensure that Windows auditing policies are correctly configured by navigating to gpedit.msc → Advanced Audit Policy Configuration → Audit Policies → Logon/Logoff → Audit Logon, and verifying that success and failure logging is enabled. You can also confirm this by running auditpol /get /subcategory:"Logon" in PowerShell.

If you see an error like the following, it could indicate a misconfiguration in inputs.conf:

ERROR ExecProcessor - message from "WinEventLog" The parameter is incorrect.

Then perform a Splunk search to confirm whether any relevant events have been indexed by running index=* sourcetype=Security:AD_Sec_entmon EventCode=4624 OR EventCode=4625. If no results appear, try searching with index=* EventCode=4624 OR EventCode=4625, or check index metadata with | metadata type=sourcetypes index=wineventlog.

If data is still missing, it's worth testing with the default Splunk sourcetype by modifying inputs.conf to use sourcetype=WinEventLog:Security instead:

[WinEventLog://Security]
index = wineventlog
sourcetype = WinEventLog:Security
disabled = 0
start_from = oldest
current_only = 1
evt_resolve_ad_obj = 1
checkpointInterval = 300
whitelist = 4624,4625

After making any configuration changes, restart the Splunk Universal Forwarder using splunk restart, or Restart-Service SplunkForwarder on Windows.
 
Hi @livehybrid

Thanks for the reply. Yes, I did change app.conf; the only change I made was to update the version number of the app, and the previous version of the app did not ask for any restart. This is my app.conf:

# this add-on is powered by splunk Add-on builder
[install]
state_change_requires_restart = false
is_configured = 0
state = enabled
build = 1

[launcher]
author = My_company
version = 1.8.3
description = This add-on allows integration for Splunk.

[ui]
is_visible = 1
label = Add-on for Splunk
docs_section_override = AddOns:released

[package]
id = TA-add-on-for-splunk

[triggers]
reload.addon_builder = simple
reload.ta_entrust_datacard_intellitrust_add_on_for_splunk_account = simple
reload.ta_entrust_datacard_intellitrust_add_on_for_splunk_settings = simple
reload.passwords = simple
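One hedged guess worth checking: Splunk decides whether an app update needs a restart partly from the [triggers] stanza, and a conf file shipped in the app with no matching reload trigger can cause the restart prompt. If 1.8.3 added a new conf file compared with the previous version, a line like this (the conf name is illustrative) may avoid it:

[triggers]
reload.my_new_conf = simple

Comparing the file lists of the two app versions should show whether anything new appeared.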
I don't know if I am correct, but: as your business day starts at 5 PM (D) and ends at 5 PM (D+1), you need to adjust _time accordingly, and you should extract the latest execution time for Job1, Job2, and Job3 within this custom day window. Based on the current time, determine whether a job is PLANNED or EXECUTED.

| makeresults count=1
| append
    [ search index=your_index sourcetype=your_sourcetype earliest=-1d@d+17h latest=@d+17h
    | eval job_status=if(_time <= now(), strftime(_time, "Executed at %H:%M"), "PLANNED")
    | stats latest(_time) as job_time latest(job_status) as job_status by job_name ]
| eval job_status=if(isnull(job_status), "PLANNED", job_status)
| table job_name job_status

Here earliest=-1d@d+17h latest=@d+17h anchors the custom day window at 5 PM, and the comparison uses now() directly, since a field created by eval in the outer search is not visible inside an append subsearch. If _time is stored in epoch format, there is no need to convert it. Adjust the @d+17h if your business day starts at a different hour.

Note: please run the above query with your own index and sourcetype names.
I see. I'd really like option one to work here, and it did work for a user who was using Splunk on-prem:

[abuseipdb_control_coll]
enforceTypes = true
field._key = string
field.value = string
replicate = true

But this same configuration was apparently not working for a user on Splunk Cloud Victoria, whatever the reason (see the previous example). I was unsure whether it was a problem with eventual consistency, or what exactly.
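One way to inspect what is actually stored in the collection on the Cloud stack, as a sketch (this assumes a lookup definition with the same name pointing at the collection):

| inputlookup abuseipdb_control_coll
| eval key_type=typeof(_key), value_type=typeof(value)
| stats count by key_type, value_type

If value_type comes back as something other than String, the enforceTypes setting is evidently not being applied there.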