All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thank you very much for your answer and help. I will try it today and come back with feedback.
I'm trying to create a role for a developer in our organization where the developer is only allowed to view dashboards created by the admin, or by a person whose role has the edit_own_objects capability attached. I created a developer role with the following capabilities:

capabilities = [
  "search",
  "list_all_objects",
  "rest_properties_get",
  "embed_report"
]

Now, when I log in as the developer and view the dashboards, they are visible and read-only, but the developer can also create new dashboards, which shouldn't be allowed. How can I restrict the developer from creating a new dashboard?

Also, the following capabilities get added to the role automatically, along with the ones I specified above:

run_collect
run_mcollect
schedule_rtsearch
edit_own_objects

I've also given read access to the developer role only, in the specific dashboard's permission settings.
Thanks. Does the definition need global permission?
Relanto@DESKTOP-FRSRLVP MINGW64 ~
$ curl -k -u admin:adminadmin https://localhost:8089/servicesNS/admin/search/data/ui/panels -d "name=user_login_panel&eai:data=<panel><label>User Login Stats</label></panel>"

<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable.-->
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <title>panels</title>
  <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels</id>
  <updated>2024-12-03T12:27:38+05:30</updated>
  <generator build="0b8d769cb912" version="9.3.1"/>
  <author><name>Splunk</name></author>
  <link href="/servicesNS/admin/search/data/ui/panels/_new" rel="create"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_reload" rel="_reload"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_acl" rel="_acl"/>
  <opensearch:totalResults>1</opensearch:totalResults>
  <opensearch:itemsPerPage>30</opensearch:itemsPerPage>
  <opensearch:startIndex>0</opensearch:startIndex>
  <s:messages/>
  <entry>
    <title>user_login_panel</title>
    <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels/user_login_panel</id>
    <updated>2024-12-03T12:27:38+05:30</updated>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="alternate"/>
    <author><name>admin</name></author>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="list"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/_reload" rel="_reload"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="edit"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="remove"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/move" rel="move"/>
    <content type="text/xml">
      <s:dict>
        <s:key name="disabled">0</s:key>
        <s:key name="eai:acl">
          <s:dict>
            <s:key name="app">search</s:key>
            <s:key name="can_change_perms">1</s:key>
            <s:key name="can_list">1</s:key>
            <s:key name="can_share_app">1</s:key>
            <s:key name="can_share_global">1</s:key>
            <s:key name="can_share_user">1</s:key>
            <s:key name="can_write">1</s:key>
            <s:key name="modifiable">1</s:key>
            <s:key name="owner">admin</s:key>
            <s:key name="perms"/>
            <s:key name="removable">1</s:key>
            <s:key name="sharing">user</s:key>
          </s:dict>
        </s:key>
        <s:key name="eai:appName">search</s:key>
        <s:key name="eai:data"><![CDATA[<panel><label>User Login Stats</label></panel>]]></s:key>
        <s:key name="eai:digest">6ad60f5607b5d1dd50044816b18d139b</s:key>
        <s:key name="eai:userName">admin</s:key>
        <s:key name="label">User Login Stats</s:key>
        <s:key name="panel.title">user_login_panel</s:key>
        <s:key name="rootNode">panel</s:key>
      </s:dict>
    </content>
  </entry>
</feed>

I created the panel following the Splunk REST API documentation: https://docs.splunk.com/Documentation/Splunk/7.2.0/RESTREF/RESTknowledge#data.2Fui.2Fpanels

After creating the panel, it is not showing in my Splunk Enterprise UI. What is the actual use of this?
@PickleRick Thanks for highlighting the limit on the number of rows returned by a subsearch. This explains why one of my other dashboards doesn't provide trustworthy values at the moment. Looks like I need to review and update some of my searches.
Is this what you're after?

| makeresults format=csv data="Day,Percent
2024-11-01,100
2024-11-02,99.6
2024-11-03,94.2
2024-12-01,22.1
2024-12-02,19.0"
| eval _time=strptime(Day, "%F")
| foreach 50 80 100
    [ eval REMEDIATION_<<FIELD>> = if(Percent <= <<FIELD>>, 1, null()) ]
| stats earliest_time(_time) as Start earliest_time(REMEDIATION_*) as r_*
| foreach r_*
    [ eval <<MATCHSTR>>%=<<FIELD>>
      | fields - <<FIELD>> ]
| foreach *
    [ eval "<<FIELD>>"=strftime('<<FIELD>>', "%F") ]
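The logic the SPL above implements — for each threshold, find the first day the percentage drops to or below it — can be sketched in Python. The data is the illustrative sample from the question, not real results:

```python
# Sample (Day, Percent) rows, as in the question
rows = [
    ("2024-11-01", 100.0),
    ("2024-11-02", 99.6),
    ("2024-11-03", 94.2),
    ("2024-12-01", 22.1),
    ("2024-12-02", 19.0),
]

thresholds = [50, 80, 100]

# Earliest day overall (ISO date strings sort chronologically),
# plus the earliest day each milestone is reached
start = min(day for day, _ in rows)
milestones = {}
for t in thresholds:
    hits = [day for day, pct in rows if pct <= t]
    milestones[f"{t}%"] = min(hits) if hits else None

print(start, milestones)
```

Note that with this reading, the 100% milestone is hit on the first day the value is at or below 100, which differs from the "-" shown in the question's desired output; the exact comparison direction is something to pin down against the real data.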
You cannot use regex matching in lookups. Lookup wildcards only support *, and only when you create a lookup definition and use the advanced options to set WILDCARD(Regex_Path). You are using a lookup file, not a definition, so the lookup must match exactly, or via a * (e.g. /home/ubuntu/*) for a wildcarded version; but then you would need another column holding the real regex. Note that c:\boot.ini is not valid regex, because the \ needs to be escaped.
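To illustrate the escaping point: interpreted as a regex, the backslash in c:\boot.ini turns "\b" into a word-boundary assertion rather than a literal backslash, so the pattern does not even match the literal path it was copied from. A small Python sketch:

```python
import re

path = r"c:\boot.ini"

# Used as a pattern, "\b" is a word-boundary assertion, not "\" + "b",
# so the raw path never matches itself
print(re.search(path, path))     # no match

# Escaping regex metacharacters yields a pattern matching the literal path
escaped = re.escape(path)
print(re.search(escaped, path))  # match
```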
Hi, from Splunk, how can I check which logs are being forwarded out to another SIEM? outputs.conf is configured to forward syslog; what does the syslog contain?
I have a table that looks like this:

Day         Percent
2024-11-01  100
2024-11-02  99.6
2024-11-03  94.2
...         ...
2024-12-01  22.1
2024-12-02  19.0

From this table I am calculating three fields, REMEDIATION_50, _80, and _100, using the following:

| eval REMEDIATION_50 = if(PERCENTAGE <= 50, "x", "")

From this eval statement, I am going to have multiple rows where the _50 and _80 fields are marked, and some where both fields are marked. I'm interested in isolating the Day of the first time each of these milestones is hit. I've yet to craft the right combination of stats, where, and eval that gets me what I want.

In the end, I'd like to get to something like this:

Start       50%         80%         100%
2024-11-01  2024-11-23  2024-12-02  -

Any help would be appreciated, thanks!
I have created a lookup table in Splunk that contains a column with various regex patterns intended to match file paths. My goal is to use this lookup table within a search query to identify events where the path field matches any of the regex patterns specified in the Regex_Path column.

Lookup file: (screenshot omitted)

Here is the challenge I'm facing: when using the match() function in my search query, it only matches if the Regex_Path pattern completely matches the path field in the event. I expected match() to perform partial matches based on the regex pattern, but that does not seem to be the case. Interestingly, if I manually replace Regex_Path in the where match() clause with the actual regex pattern, the match succeeds as expected.

Here is an example of my search query:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path)
| lookup Sensitive_File_Path.csv Regex_Path AS path_lower OUTPUT Regex_Path, Note
| where match(path_lower, Regex_Path)
| table path_lower, Regex_Path, Note

I would like to understand why match() isn't working as anticipated when driven by the lookup table, and whether there is a better method to achieve the desired regex matching. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
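For reference, "partial" matching in the sense this question expects works like re.search in Python: the pattern may match anywhere inside the subject string, which is also how Splunk's match() is documented to behave. The patterns below are illustrative, not from the actual lookup:

```python
import re

# Illustrative patterns, standing in for a Regex_Path lookup column
patterns = [r"/home/ubuntu/\.ssh/.*", r"/etc/passwd"]
path = "/home/ubuntu/.ssh/id_rsa"

# re.search matches anywhere in the string (a partial match)
hits = [p for p in patterns if re.search(p, path)]
print(hits)
```

The catch in the original search is upstream of match(): the lookup command itself does exact (or at most *-wildcard) matching when joining Regex_Path against path_lower, so the regex row is usually never returned in the first place.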
"No luck" and "does not work" are useless words in this forum. What is the input? What is the output? How does the output differ from your expectations? Are you sure your data contains time periods where the condition is satisfied? Unless you can illustrate these data points, volunteers here cannot help you.

Here is an emulation for the first search. As you can see, the results remaining after "where" all have output1 > 30% of output2:

index = _audit action IN (artifact_deleted, quota)
| rename action as field1
| eval field1 = if(field1 == "quota", "output1", "output2")
``` the above emulates index=sample sample="value1" ```
| timechart span=10m count by field1
| where output1 > 0.3 * output2

My output is:

_time                output1  output2
2024-12-01 21:00:00  6        0
2024-12-01 21:20:00  4        4
2024-12-01 22:00:00  2        2
2024-12-01 23:30:00  11       11
2024-12-01 23:40:00  2        4
2024-12-02 00:00:00  10       8
2024-12-02 01:00:00  6        8
2024-12-02 03:00:00  11       31
2024-12-02 03:10:00  5        6
2024-12-02 03:20:00  3        8
2024-12-02 03:30:00  3        7
2024-12-02 03:40:00  5        4
2024-12-02 03:50:00  8        13
2024-12-02 04:00:00  5        11
2024-12-02 04:10:00  14       12
2024-12-02 04:20:00  12       14
2024-12-02 04:30:00  6        13
2024-12-02 04:50:00  4        0
2024-12-02 07:10:00  2        2
2024-12-02 12:00:00  6        0

Without "where", there are 150 time intervals. Play with the emulation; modify it to see how timechart, time buckets, and filter conditions work together with different datasets. Then analyze your own dataset. For example, if your search doesn't return any results when "where" is applied, post the output with "where" removed. (You can anonymize actual values with "output1" and "output2" as I do in the emulation, but keep the data accurate to the real data.)
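The where-clause arithmetic itself is easy to sanity-check outside Splunk; a minimal Python sketch using a few buckets from the table above, plus one hypothetical bucket that fails the condition:

```python
# (output1, output2) counts for a few buckets from the table above,
# plus one hypothetical bucket that fails the condition
buckets = {
    "2024-12-01 21:00:00": (6, 0),
    "2024-12-02 03:00:00": (11, 31),
    "2024-12-02 04:20:00": (12, 14),
    "2024-12-02 05:00:00": (1, 10),  # hypothetical: 1 is not > 0.3 * 10
}

# Keep buckets where output1 > 30% of output2, as the `where` clause does
kept = {t: v for t, v in buckets.items() if v[0] > 0.3 * v[1]}
print(sorted(kept))
```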
Hello, dear Splunk Community.

I am trying to extract the ingest volume from our client's search head, but I noticed that I am getting different results depending on which method I use. For example, if I run the following query:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval GB=round(b/1024/1024/1024, 3)
| timechart sum(GB) as Volume span=1d

I get the following table:

_time       Volume
2024-11-25  240.489
2024-11-26  727.444
2024-11-27  751.526
2024-11-28  777.469
2024-11-29  727.366
2024-11-30  724.419
2024-12-01  787.632
2024-12-02  587.710

On the other hand, when I go to Apps > CMC > License usage > Ingest and fetch the data for "last 7 days" (same as above), I get the following table:

_time       GB
2024-11-25  851.012
2024-11-26  877.134
2024-11-27  872.973
2024-11-28  949.041
2024-11-29  939.627
2024-11-30  835.154
2024-12-01  955.316
2024-12-02  963.486

As you can see, there is a considerable mismatch between the two results, so I'm at a crossroads because I don't know which one to trust. Based on previous topics, I notice the above query has been recommended before, even in posts from 2024. I don't know if this is related to my user not having the appropriate capabilities or whatnot, but any insights about this issue are greatly appreciated.

Cheers, everyone.
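One thing worth checking in the query itself: GB is rounded to 3 decimals per event before timechart sums it, so any event smaller than about 0.0005 GB (~500 KB) contributes zero. Whether that accounts for this particular gap is an assumption, but the effect is real and easy to demonstrate in Python with made-up event sizes:

```python
GB = 1024 ** 3

# Hypothetical per-event byte counts: many small events (100 KB each)
# plus one 5 GB event
events_b = [100 * 1024] * 10_000 + [5 * GB]

# Round each event's GB to 3 decimals, then sum (what the query does):
# every 100 KB event rounds to 0.000 and is lost
rounded_first = sum(round(b / GB, 3) for b in events_b)

# Sum raw bytes, then round once at the end
summed_first = round(sum(events_b) / GB, 3)

print(rounded_first, summed_first)
```

Moving the round() to after the timechart sum removes this bias from the search; squashing and index-time vs license-time accounting are other common sources of mismatch worth ruling out.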
Wondering if this will work for you. It puts both datasets in the outer query. The first stats pulls all fields together by TraceId, then the where removes those without data. The @t field will contain multivalue dates, which get converted, and then your next stats collapses any duplicates.

(index=test OR index=test2 source="insertpath" ErrorCodesResponse=TestError TraceId=*) OR (index=test "Test SKU" AND @MT !="TestAsync: Request(Test SKU: )*")
| fields TraceId, @t, @MT, RequestPath
| stats values(*) as * by TraceId
| where isnotnull('@t') AND isnotnull('@mt') AND match('@mt', "Test SKU: *")
| eval date=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%Y-%m-%d"),
       time=strftime(strptime('@t', "%Y-%m-%dT%H:%M:%S.%6N%Z"), "%H:%M")
| stats values(date) as date values(time) as time values(@mt) as message values(RequestPath) as Path by TraceId
| where isnotnull(date) AND isnotnull(time) AND isnotnull(message)
| table date, time, TraceId, message, Path

There may be more optimisations depending on your data.
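The eval step simply splits an ISO-style timestamp into date and time-of-day strings. The equivalent in Python, using a sample timestamp (and %f/%z where the Splunk format string uses %6N%Z for microseconds and zone), might look like:

```python
from datetime import datetime

ts = "2024-12-03T12:27:38.123456+0530"  # illustrative value

dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%f%z")
date = dt.strftime("%Y-%m-%d")  # the SPL `date` field
time = dt.strftime("%H:%M")     # the SPL `time` field
print(date, time)
```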
Hi,

There is at least this one: https://splunkbase.splunk.com/app/5927. It's not exactly what you are looking for, but it should give you some ideas about how to do it.

Basically, you can do it, as you said, with an alert action (which could be an issue if you want to send a lot of data). Another way is to create a custom command and use it. But if you have a lot of data to export, maybe the easiest way is to create a saved search, call it over the Splunk REST API from some other job-management software/system, and have that send the results forward.

r. Ismo
One way using stats, which will be efficient:

| makeresults
| eval new_set="A,B,C"
| makemv delim="," new_set
| append
    [| makeresults
     | eval baseline="X,Y,Z" ]
| makemv delim="," baseline
``` Join rows together ```
| stats values(*) as *
``` Expand out the baseline data ```
| stats values(*) as * by baseline
``` Collect combinations ```
| eval combinations=mvmap(new_set, new_set . "-" . baseline)
``` and combine again ```
| stats values(combinations) as combinations

It relies on the expansion of the MV using stats by baseline, which could also be done with mvexpand; not sure which one is more efficient.
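The cross-join the SPL builds is an ordinary Cartesian product; in Python terms, using the sample sets from the question:

```python
from itertools import product

new_set = ["A", "B", "C"]
baseline = ["X", "Y", "Z"]

# Every new_set/baseline pairing, formatted like the SPL's mvmap output
combinations = [f"{n}-{b}" for n, b in product(new_set, baseline)]
print(combinations)  # nine pairs, "A-X" through "C-Z"
```

With 200 values per side this yields the ~40,000 rows mentioned in the question, which is why the multivalue expansion (or mvexpand) is the part to watch for memory limits.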
Not yet. I'm still discussing with support whether this is a bug or something else. Currently we are waiting for a (final?) answer from the developers/PM to hear what their plans for it are.
Using the below sample search, I'm trying to get every possible combination of results between two different sets of data, and I'm interested in whether there are any good techniques for doing so that are relatively efficient. With the production data set I'm working with, it should translate to about 40,000 results. Below is just an example to make the data set easier to understand. Thank you in advance for any assistance.

Sample search:

| makeresults
| eval new_set="A,B,C"
| makemv delim="," new_set
| append
    [| makeresults
     | eval baseline="X,Y,Z" ]
| makemv delim="," baseline

The output should be roughly in the format below, and I'm stuck on getting the data manipulated in a way that aligns with it.

new_set - baseline
--
A-X
A-Y
A-Z
B-X
B-Y
B-Z
C-X
C-Y
C-Z
Hey. Any updates regarding the bug? I found the same issue using the latest Splunk (9.3.2).
When you run the command "netsh wlan show wlanreport", it generates not only an HTML report but also an XML report. This is good, because the HTML report is intended for human consumption, so Splunk will not be happy with it. You can instead index the XML file, which is at:

C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.xml

To set up Splunk to generate and index this file once per hour, you need 3 configuration files:

1) A props.conf file on your indexer machine(s)

# Put this in /opt/splunk/etc/apps/<yourappname>/local/props.conf
[WlanReport]
maxDist = 170
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <?xml version
TIME_PREFIX = ReportDate>

2) An inputs.conf file on your forwarder machine(s)

# Put this in /opt/splunkforwarder/etc/apps/<yourdeploymentappname>/local/inputs.conf
[monitor://C:\ProgramData\Microsoft\Windows\WlanReport\wlan-report-latest.xml]
index = main
sourcetype = WlanReport
disabled = 0
initCrcLength = 256

# Use a scripted input to run the command once per X seconds, specified by the interval.
# (I had trouble getting it to work with a relative path to the script.)
[script://C:\Program Files\SplunkUniversalForwarder\etc\apps\<yourdeploymentappname>\bin\scripts\wlanreport.bat]
interval = 3600
disabled = 0

3) The script file on your forwarder machine(s):

# Put this at the bin\scripts\wlanreport.bat path referenced in the [script://...] stanza
@echo off
netsh wlan show wlanreport

You will then have events coming in containing the XML file contents, every hour.
Sorry for the delayed response; the holidays got in the way.

I ran "splunk btool server list sslConfig" and it returned no data. I tried it without sslConfig, searched for that cert name, and found nothing.

When I run:

openssl.exe x509 -enddate -noout -text -in "c:\program files\splunk\etc\auth\server_pkcs1.pem"

it shows the issuer as Splunk.