All Topics

I've got a question about lookup tables and how to audit them. I have a rather large lookup table that's being recreated daily by a scheduled correlation search. I don't know whether any other correlation searches, or anything else, are actually using that lookup table. I wanted to see if there was a way to audit its use so I can delete the table and remove the correlation search if needed.
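One hedged approach: searches that reference a lookup generally show up in the _audit index, so a sketch along these lines (with your_lookup_name as a placeholder for the actual table name, and assuming the usual _audit field names like savedsearch_name, which are worth verifying in your environment) may reveal whether any saved or ad-hoc searches touch it:

```spl
index=_audit action=search search=*your_lookup_name*
| stats count latest(_time) as last_used by user savedsearch_name
| convert ctime(last_used)
```

An empty savedsearch_name would indicate ad-hoc use; run it over a window at least as long as your longest scheduled-search interval before concluding the table is unused.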
Hello, I want to filter Cisco ISE logging activities by authentication, authorization, and accounting. So far, I've been able to filter by accounting, but not the other two. In Cisco ISE logging, TACACS Accounting has been linked to the Splunk server, and I am running the following filter: <ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time. Any advice is much appreciated. Thanks.
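A hedged sketch of one way to split events by category, assuming the other ISE logging categories are also being forwarded and use the usual CISE_* message-class prefixes (the exact class names below are assumptions and should be checked against your ISE logging configuration):

```spl
<ISE Server> ("CISE_TACACS_Accounting" OR "CISE_Passed_Authentications" OR "CISE_Failed_Attempts")
| eval category=case(searchmatch("CISE_TACACS_Accounting"), "Accounting",
                     searchmatch("CISE_Passed_Authentications"), "Authentication (passed)",
                     searchmatch("CISE_Failed_Attempts"), "Authentication (failed)")
| timechart count by category
```

Note that authentication and authorization events will only appear in Splunk if those logging categories are also pointed at the Splunk remote logging target in ISE; linking only TACACS Accounting would explain why accounting is the only category you can filter.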
Hi, a Windows UF stopped sending events. I saw this event in the _internal index: 'message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" Clean shutdown completed.' The UF version is 9.2.1. What does it mean, and how can I prevent it from happening again?
Dear Splunk Support,

I am encountering an issue while configuring Splunk to filter logs based on specific ports (21, 22, 23, 3389) using props.conf and transforms.conf. Despite following the proper configuration steps, the filtering is not working as expected. Details below.

System details:
- Splunk version: 9.3.2
- Deployment type: Heavy Forwarder
- Log source: /opt/log/indexsource/*

Configuration applied:

props.conf (located at $SPLUNK_HOME/etc/system/local/props.conf):
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = filter_specific_ports

transforms.conf (located at $SPLUNK_HOME/etc/system/local/transforms.conf):
[filter_specific_ports]
REGEX = .* (21|22|23|3389) .*
DEST_KEY = queue
FORMAT = indexQueue

I have also tried variations such as:

transforms.conf:
[filter_ports]
REGEX = (21|22|23|3389)
DEST_KEY = queue
FORMAT = indexQueue
[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

and

props.conf:
[source::your_specific_source]
TRANSFORMS-filter_ports = allow_ports, drop_other_ports

transforms.conf:
[allow_ports]
REGEX = (21|22|23|3389)
DEST_KEY = _MetaData:Index
FORMAT = your_index_name
[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Issue observed: the expected behavior is that logs containing these ports should be routed to indexQueue, but they are not being filtered as expected. All logs are still being indexed in the default index. I checked for syntax errors and restarted Splunk, but the issue persists.

Troubleshooting steps taken:
- Verified regex: confirmed that the regex .* (21|22|23|3389) .* correctly matches log lines using regex testing tools.
- Checked Splunk logs: looked for errors in $SPLUNK_HOME/var/log/splunk/splunkd.log but found no related warnings.
- Restarted Splunk: restarted the service after configuration changes using splunk restart.
- Checked events in Splunk: ran searches to confirm that logs with these ports were still being indexed.
Request for assistance: could you please advise on:
- Whether there are any syntax issues in my configuration?
- Whether additional debugging steps are needed?
- Alternative methods to ensure only logs containing ports 21, 22, 23, and 3389 are routed correctly?
Your assistance in resolving this issue would be greatly appreciated. Best regards, Namchin Baranzad Information Security Analyst M Bank Email: namchin.b@m-bank.mn
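One point worth checking against the configurations above: per the props.conf/transforms.conf spec, transforms listed in a single TRANSFORMS- setting run in order, and events go to indexQueue by default anyway, so a "route matches to indexQueue" transform on its own filters nothing. The documented keep-only pattern is to discard everything first, then route matches back. A minimal sketch (stanza names are placeholders):

```ini
# props.conf -- the catch-all nullQueue transform must come FIRST;
# keep_ports then overrides queue back to indexQueue for matching events
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = setnull, keep_ports

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_ports]
REGEX = \b(21|22|23|3389)\b
DEST_KEY = queue
FORMAT = indexQueue
```

Also note that `.* (21|22|23|3389) .*` requires literal spaces around the port number, which may not match your log format, and these transforms only take effect where parsing happens (the HF in this case), not on a universal forwarder.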
Hello, I'm building a search to get alerted when we go over the license. I have a search that works well to get the license usage and alert when it goes over the 300GB license:

index=_internal source=*license_usage.log type=Usage pool=*
| eval _time=strftime(_time,"%m-%d-%y")
| stats sum(b) as ub by _time
| eval ub=round(ub/1024/1024/1024,3)
| eval _time=strptime(_time,"%m-%d-%y")
| sort _time
| eval _time=strftime(_time,"%m-%d-%y")
| rename _time as Date ub as "Daily License Quota Used"
| where 'Daily License Quota Used' > 300

But as you can see, I set the 300GB limit manually in the search. Is there a way to get this info from Splunk directly? I see that the CMC uses `sim_licensing_limit` to get the info, but this doesn't work when searching outside the CMC. Thanks! Lucas
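A hedged sketch: the licenser REST endpoints expose pool quotas, so something like the following may return the licensed volume without hardcoding it (field names such as effective_quota are what the /services/licenser/pools endpoint typically returns, but verify with a bare `| rest /services/licenser/pools` first, and note the search must run with a role allowed to read licenser endpoints):

```spl
| rest splunk_server=local /services/licenser/pools
| stats sum(effective_quota) as quota_bytes
| eval quota_gb=round(quota_bytes/1024/1024/1024,3)
```

The result could then be pulled into the usage search with appendcols or a subsearch, so the `where` clause compares against quota_gb instead of the literal 300.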
index=aws* Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="xXXX" network_environment=test source="API-Gateway-Execution-Logs*"
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| where status=200
| rename _time as request_time
| fieldformat request_time=strftime(request_time, "%F %T")
| join type=inner messageGUID
    [ search kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype = "kubernetes_logs" source = *XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | table messageGUID, _time ]
| table messageGUID, request_time, _time

_time is coming back as null in the output. Also, how can I rename this field?
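A common cause here is that `_time` from a join subsearch does not reliably survive into the outer results. One hedged workaround, which also answers the rename question, is to rename `_time` inside the subsearch before joining (kube_time below is an illustrative name, not anything predefined):

```spl
... | join type=inner messageGUID
    [ search kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | rename _time as kube_time
    | table messageGUID kube_time ]
| fieldformat kube_time=strftime(kube_time, "%F %T")
| table messageGUID, request_time, kube_time
```

Because kube_time is an ordinary field rather than an internal one, it comes through the join like any other subsearch field.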
I have a query which is a lookup, and I have assigned the output to "Report", as I want to send the entirety of the report via Teams. But I'm struggling to send it as a table; it comes through as one unreadable blob. Here's the output in Teams, and this is my query:

index="acoe_bot_events" unique_id = *
| lookup "LU_ACOE_RDA_Tracker" ID AS unique_id
| search Business_Area_Level_2="Client Solutions Insurance" , Category="*", Business_Unit = "*", Analyst_Responsible = "*", Process_Name = "*"
| eval STP=(passed/heartbeat)*100
| eval Hours=(passed*Standard_Working_Time)/60
| eval FTE=(Hours/127.5)
| eval Benefit=(passed*Standard_Working_Time*Benefit_Per_Minute)
| stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP, sum(FTE) as FTE_Saved, sum(Hours) as Hours_Saved, sum(Benefit) as Rand_Benefit by Process_Name, Business_Unit, Analyst_Responsible
| foreach * [eval FTE_Saved=round('FTE_Saved',3)]
| foreach * [eval Hours_Saved=round('Hours_Saved',3)]
| foreach * [eval Rand_Benefit=round('Rand_Benefit',2)]
| foreach * [eval Average_STP=round('Average_STP',2)]
| eval row = Process_Name . "|" . Analyst_Responsible . "|" . Business_Unit . "|" . Volumes . "|" . Successful . "|" . Average_STP
| stats values(row) AS report
| eval report = mvjoin(report, " ")
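One hedged tweak: joining the rows with a space flattens everything onto one line. Using a literal newline as the mvjoin delimiter keeps one record per line, and prepending a header row makes the columns legible:

```spl
...
| eval row = Process_Name . "|" . Analyst_Responsible . "|" . Business_Unit . "|" . Volumes . "|" . Successful . "|" . Average_STP
| stats values(row) AS report
| eval header="Process_Name|Analyst_Responsible|Business_Unit|Volumes|Successful|Average_STP"
| eval report = header . "
" . mvjoin(report, "
")
```

Whether Teams preserves those newlines depends on how the message is posted (plain text vs. an adaptive card), which I can't verify from here; if Teams renders markdown, a `|`-delimited layout like this is at least close to a markdown table.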
We are installing a modular input (Akamai add-on) to get Akamai logs into Splunk. In our environment, we have kept the modular input on the DS under deployment-apps and pushed it to the HF using a serverclass. Is this the issue? Do modular inputs need to be installed directly on the HF rather than pushed from the DS? When we configure the data input on the HF (for the app pushed from the DS) and save it, it throws a 404 error, action forbidden. When we install it directly on the HF, it saves perfectly. In our environment almost all apps are pushed from the DS to the CM and from the DS to the Deployer; most are not modular inputs and contain just configs, and so far this has worked well. Is this bad practice in the case of modular inputs? Please guide me.
Hi, I have a dashboard (code below). Currently it fetches results whenever the env dropdown is selected, either test or prod. My use case: after I select the env from the dropdown, select the data entity, and pick a time range, results should only be fetched, based on all of those selections, when I hit a submit button. <form version="1.1" theme="dark"> <label>Metrics222</label> <fieldset> <input type="dropdown" token="indexToken1" searchWhenChanged="false"> <label>Environment</label> <choice value="prod-,prod,*">PROD</choice> <choice value="np-,test,*">TEST</choice> <change> <eval token="stageToken">mvindex(split($value$,","),1)</eval> <eval token="indexToken">mvindex(split($value$,","),0)</eval> </change> </input> <input type="dropdown" token="entityToken" searchWhenChanged="false"> <label>Data Entity</label> <choice value="*">ALL</choice> </input> <input type="time" token="timeToken" searchWhenChanged="false"> <label>Time</label> <default> <earliest>-24h@h</earliest> <latest>now</latest> </default> </input> </fieldset> <row> <panel> <html id="APIStats"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">API USAGE STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique User / Unique Client</title> <search> <query>index=$indexToken$ AND source="/aws/lambda/g-lambda-au-$stageToken$" | stats dc(claims.sub) as "Unique Users", dc(claims.cid) as "Unique Clients" BY claims.cid claims.groups{} | rename claims.cid AS app, claims.groups{} AS groups | table app "Unique Users" "Unique Clients" groups</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> </row> <row> <panel> <html id="nspCounts"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">NSP STREAM STATISTICS</h2> </html> </panel> </row> <row> <panel> <table> <title>Unique Consumer</title> <search> <query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | sort -_time | head 1 | 
mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name="*costing*" | table Name, DistinctAdminUserCount</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> </table> </panel> <panel> <table> <title>Event Processed</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "success Published to NSP3 objectType*" | rex field=msg "objectType\s*:\s*(?&lt;objectType&gt;[^\s]+)" | stats count by objectType</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <table> <title>Number of Errors</title> <search> <query>index="$indexToken$" source="/aws/lambda/publish-$entityToken$-$stageToken$-nsp" "error*" | stats count</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>API : Data/Search Count</title> <html id="errorcount5"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user"> API COUNT STATISTICS</h2> </html> </panel> </row> <row> <panel> <title>Total Request Data</title> <table> <search> <query>(index=$indexToken$ source="/aws/lambda/api-data-$stageToken$-$entityToken$" OR source="/aws/lambda/api-commands-$stageToken$-*") ge:*:init:*:invoke | spath path=event.path output=path | spath path=event.httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option 
name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Request Search</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") ge:init:*:invoke | spath path=path output=path | spath path=httpMethod output=http | eval Path=http + " " + path |stats count by Path</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Total Error Count :</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") msg="error*" (error.status=4* OR error.status=5*) | eval status=case(like(error.status, "4%"), "4xx", like(error.status, "5%"), "5xx") | stats count by error.status</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Response Time Count in ms</title> <table> <search> <query>index=np-papi source IN ("/aws/lambda/api-search-test-*") "ge:init:search:response" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="Search API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . " ms" | table API, TotalResponseTime, AvgResponseTime | append [ search index=np-papi source IN ("/aws/lambda/api-data-test-*") msg="ge:init:data:*" | stats sum(responseTime) as TotalResponseTime, avg(responseTime) as AvgResponseTime | eval API="DATA API" | eval TotalResponseTime = TotalResponseTime . " ms" | eval AvgResponseTime = round(AvgResponseTime, 2) . 
" ms" | table API, TotalResponseTime, AvgResponseTime ] | table API, TotalResponseTime, AvgResponseTime</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <html id="errorcount16"> <style> #user{ text-align:center; color:#BFFF00; } </style> <h2 id="user">Request per min</h2> </html> </panel> </row> <row> <panel> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityToken$","/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:*:*" | timechart span=1m count by source | untable _time source count | stats sum(count) as TotalCount, avg(count) as AvgCountPerMin by source | eval AvgCountPerMin = round(AvgCountPerMin, 2) | eval source = if(match(source, "api-data-test-(.*)"), replace(source, "/api-data-test-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-data-prod-(.*)"), replace(source, "/aws/lambda/api-data-prod-(.*)", "data-\\1"), if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda/api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambdaapi-search-prod-(.*)", "search-\\1")))) | table source, TotalCount, AvgCountPerMin</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>SLA % :DATA API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambdaapi-data-$stageToken$-$entityToken$") "ge:init:data:responseTime" | eval SLA_threshold = 113 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "DATA API" | table source, SLA_percentage, totalRequests, 
SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>SLA % :SEARCH API</title> <table> <search> <query>index=$indexToken$ source IN ("/aws/lambda/api-search-$stageToken$-$entityToken$") "ge:init:search:response:time" | eval SLA_threshold = 100 | eval SLA_compliant = if(responseTime &lt;= SLA_threshold, 1, 0) | stats count as totalRequests, sum(SLA_compliant) as SLA_passed by source | eval SLA_percentage = round((SLA_passed / totalRequests) * 100, 2) | eval API = "SEARCH API" | eval source = if(match(source, "/aws/lambda/api-search-test-(.*)"), replace(source, "/aws/lambda\api-search-test-(.*)", "search-\\1"), replace(source, "/aws/lambda/api-search-prod-(.*)", "search-\\1")) | table source, SLA_percentage, totalRequests, SLA_passed</query> <earliest>$timeToken.earliest$</earliest> <latest>$timeToken.latest$</latest> <refresh>60m</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>  
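On the submit-button question: in Simple XML the usual approach is `submitButton="true"` on the fieldset, combined with `searchWhenChanged="false"` on each input (which the inputs above already have). A minimal sketch of the changed opening tag:

```xml
<fieldset submitButton="true" autoRun="false">
  <!-- existing inputs unchanged; searchWhenChanged="false" keeps them
       from triggering searches until Submit is pressed -->
</fieldset>
```

Tokens set inside the `<change>` handler (stageToken/indexToken) still update as you pick values, but with a submit button the panels should only consume the submitted token values when Submit is clicked; worth verifying the behavior with the `<change>` eval tokens in your Splunk version.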
I've noticed an issue with one of my syslog indexes. I have a syslog server centralizing and forwarding syslogs for 6 different indexes. Not too long ago, I modified one of the indexes to extract fields at the UF instead of the indexer (this solved another problem that is not relevant here; I can provide detail if it becomes relevant). I noticed that occasionally, the index that is extracting field names at the UF stops sending while the others keep sending. The only thing that reliably gets it sending again is restarting the Splunk service on the UF. I'm newish to Splunk, so I'm sure I am not troubleshooting everything I should be. The one thing I noticed is that right before the index stops sending, I see errors in the _internal index:

ERROR TailReader [2264207 tailreader0] - Ignoring path="<path/to/log/syslog.log>" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.

Based on some research here, I think I've discovered the problem, but I need confirmation before I start making changes. I added the following settings for the sourcetypes on the forwarder, in props.conf:

[sourcetype:here]
FIELD_DELIMITER = whitespace
FIELD_NAMES = field1,field2,field3,etc...

I think the problem is that I did not specify:

...
INDEXED_EXTRACTIONS = W3C
...

So my question is: do I need the INDEXED_EXTRACTIONS parameter if I use FIELD_DELIMITER & FIELD_NAMES, or can those be used without it? I believe this is what is missing and is causing Splunk to periodically stop processing the file. If I do not need it, then I would need to search for a different cause. Thanks in advance for your help.
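For reference, per the props.conf spec FIELD_DELIMITER and FIELD_NAMES are structured-data settings that are meant to be used alongside INDEXED_EXTRACTIONS (documented modes include w3c, csv, tsv, and psv), so the usual shape of the stanza on the UF would be something like the sketch below (sourcetype name taken from the post; whether w3c is the right mode depends on the actual log layout):

```ini
# props.conf on the UF -- sketch, not a verified fix
[sourcetype:here]
INDEXED_EXTRACTIONS = w3c
FIELD_DELIMITER = whitespace
FIELD_NAMES = field1,field2,field3
```

Whether the missing INDEXED_EXTRACTIONS setting is actually what triggers the applyPendingMetadata error is plausible given the wording ("header processor does not own the indexed extractions confs") but not certain; testing on one sourcetype before rolling out broadly would be prudent.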
Hello guys, I'd like to hide the external border lines of a panel when I hover over it. By default, external border lines appear on mouseover. Can anyone help me disable this default behavior (and, if possible, the small tooltip as well)? Thanks
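One hedged option for a Simple XML dashboard is a hidden HTML panel that overrides the hover styling. The class name below (.dashboard-panel) is an assumption about the panel container class in current Splunk Web themes and may differ between versions, so inspect the element in your browser first:

```xml
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* assumption: .dashboard-panel is the panel container class */
        .dashboard-panel:hover {
          border: none !important;
          box-shadow: none !important;
        }
      </style>
    </html>
  </panel>
</row>
```

The `depends` token keeps the CSS row itself invisible; the tooltip, if it is a separate element, would need its own selector found the same way.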
This has been scratching my head. I'm working on dashboards for user activity in our application. In several of them I'm trying to report when an external document tied to a specific module of our application was downloaded. Our application uses SQL. When I download an external document in our application, it generates a total of 12 separate events. I set up field extractions on DocumentId, DocumentTypeId, and DocumentFileTypeId. What I am trying to do with the 12 separate events is to show them as one event with the DocumentTypeId, DocumentId, and DocumentFileTypeId (the last one is less important). The DocumentTypeId is my primary field since it's tied to the dashboard for the specific module, followed by the DocumentId. The DocumentTypeId tells me what the file pertains to in the system, the DocumentFileTypeId whether it's a PDF or a different type, and the DocumentId is the document number in SQL. For the time, I am using _time instead of accessDate in the events. All 12 events came from the single download, even though there are three distinct accessDate times. I've tried a few different commands and searches now and just can't seem to get it to report correctly. When I think I have it correct, it falls apart when I expand the search range; for example, when I used the join command, all my DocumentIds would list the same DocumentTypeId, which I verified against the DB to be incorrect. These are the raw events, exported and sanitized for the post. The events are valid JSON.
T {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentFileTypeGetById","commandText":"ref.DocumentFileTypeGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentFileTypeId","value":7}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentAttributeGetByDocumentTypeId","commandText":"ref.DocumentAttributeGetByDocumentTypeId","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentTypeId","value":00},{"name":"@IncludeInactive","value":false}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} 
{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentDetailGetByParentId","commandText":"ref.DocumentDetailGetByParentId","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentStatusHistoryGetByFK","commandText":"ref.DocumentStatusHistoryGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentVersionId","value":000000},{"name":"@IncludeInactive","value":""}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} 
{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentVersionGetByFK","commandText":"ref.DocumentVersionGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentLinkGetByFK","commandText":"ref.DocumentLinkGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8614186-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} 
{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentGetById","commandText":"ref.DocumentGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.8457543-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.ViewDocument","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"Get"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentFileTypeGetById","commandText":"ref.DocumentFileTypeGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentFileTypeId","value":7}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.736377-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain","method":"ViewDocument"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"GetLatestDocumentwithoutAttributes"}]} 
{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentStatusHistoryGetByFK","commandText":"ref.DocumentStatusHistoryGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentVersionId","value":000000},{"name":"@IncludeInactive","value":""}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.736377-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain","method":"ViewDocument"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"GetLatestDocumentwithoutAttributes"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentVersionGetByFK","commandText":"ref.DocumentVersionGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.736377-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain","method":"ViewDocument"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"GetLatestDocumentwithoutAttributes"}]} 
{"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentLinkGetByFK","commandText":"ref.DocumentLinkGetByFK","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.736377-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain","method":"ViewDocument"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"GetLatestDocumentwithoutAttributes"}]} {"auditResultSets":null,"schema":"ref","storedProcedureName":"DocumentGetById","commandText":"ref.DocumentGetById","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@DocumentId","value":000000}],"serverIPAddress":"000.000.000.000","serverHost":"Webserver","clientIPAddress":"000.000.000.000","sourceSystem":"WebSite","module":"Vendor.Product.BLL.DocumentManagement","accessDate":"2025-03-21T16:37:14.736377-06:00","userId":0000,"userName":"username","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.DocumentManagement.DocumentManagementMain","method":"ViewDocument"},{"type":"Manager","class":"Vendor.Product.BLL.DocumentManagement.DocumentManager","method":"GetLatestDocumentwithoutAttributes"}]}
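Since all 12 events share the same userName and near-identical accessDate values, one hedged way to collapse them without join is a single stats over a short time bucket, letting values() merge the IDs from whichever of the 12 events carried them. This assumes your existing field extractions for DocumentId/DocumentTypeId/DocumentFileTypeId already apply at search time; index and sourcetype below are placeholders:

```spl
index=your_index sourcetype=your_sourcetype module="Vendor.Product.BLL.DocumentManagement"
| bin _time span=5s
| stats values(DocumentId) as DocumentId
        values(DocumentTypeId) as DocumentTypeId
        values(DocumentFileTypeId) as DocumentFileTypeId
        by _time userName
```

The span=5s assumes the 12 events of one download land within a few seconds of each other, which the sample accessDate timestamps suggest; `transaction userName maxspan=5s` would group the same way but stats is cheaper and avoids the join-induced mismatches you saw over longer ranges.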
I'm looking for training that covers when a TA has to be deployed at the indexer level rather than to the HF or the search head. I know the HF usually gets the "Add-on" version of a TA; I just ran across a circumstance recently where I was told I'd have to deploy a TA at the indexer level. I know there's the Splunk Certified Admin training on Udemy. There's also the Splunk Enterprise Certified Admin training directly from Splunk. Would either of those, or something else, cover what I'm looking for? Thanks
Hello all, I am trying to understand the command type of the fields command. The documentation says it is "distributable streaming", which means it can run on the indexers, improving processing time. If I use the fields command to specify fields that are extracted on the search head (using field discovery, for example), how can it still be considered distributable streaming? If I am not mistaken, field extraction on the indexers is only possible with the rex command or with indexed fields. Thank you in advance!
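One possible resolution of this confusion (worth verifying against the Splunk docs): most field extractions are search-time, and the search head ships its knowledge objects to the indexers in the knowledge bundle, so the indexers can apply those extraction rules themselves. That is why a `fields` command placed before the first transforming command can still execute remotely. A minimal sketch, with hypothetical index and field names:

```
index=web sourcetype=access_combined
| fields host, status, uri_path
| stats count by status
```

Here everything up to and including `fields` is distributable streaming and runs on the indexers; only the final `stats` aggregation is completed on the search head.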
Hello, I'm having a problem with the colouring of a column in my table. I need to colour the AverageExecutionTime column according to the value of Treshold. If AverageExecutionTime > Treshold then the AverageExecutionTime column is coloured red. If AverageExecutionTime < Treshold then the AverageExecutionTime column is coloured green. I've tried lots of things but it doesn't work, the condition isn't respected, and AverageExecutionTime is always coloured green.  The first line should be in red   <row> <panel> <title>XRT Execution Dashboard</title> <table> <search> <query>index="aws_app_corp-it_xrt" sourcetype="xrt_log" "OK/INFO - 1012550 - Total Calc Elapsed Time" | rex field=source "(?&lt;Datetime&gt;\d{8}_\d{6})_usr@(?&lt;Username&gt;[\w\.]+)_ses@\d+_\d+_MAXL#(?&lt;TemplateName&gt;\d+)_apd@(?&lt;ScriptName&gt;[\w]+)_obj#(?&lt;ObjectID&gt;[^.]+)\.msh\.log" | rex "Total Calc Elapsed Time\s*:\s*\[(?&lt;calc_time&gt;\d+\.\d+)\]\s*seconds" | stats avg(calc_time) as AverageExecutionTime max(calc_time) as MaxExecutionTime by ScriptName, ObjectID, TemplateName | eval AverageExecutionTime = round(AverageExecutionTime, 3) |lookup script_tresholds ObjectID ScriptName MaxLTemplate as "TemplateName" OUTPUT Threshold AS "Treshold" | table ScriptName, AverageExecutionTime, MaxExecutionTime, Treshold, ObjectID, TemplateName |search $ScriptName$ $ObjectID$ $TemplateName$ |sort - AverageExecutionTime</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <!--format type="color" field="AverageExecutionTime"> <colorPalette type="expression"> <mapping field="AverageExecutionTime"> if(AverageExecutionTime > Treshold, "#D94E17", "#55C169") </mapping> </colorPalette> </format--> <!-- Conditional colouring --> <option name="count">100</option> <option name="drilldown">cell</option> <option name="refresh.display">progressbar</option> <format type="color" field="Color"> <colorPalette type="map">{"High":"#D94E17", "Low":"#55C169"}</colorPalette> </format> 
<drilldown> <condition field="ScriptName"> <link target="_blank">/app/search/dev_vwt_dashboards_uc31_details?ScriptName=$row.ScriptName$&amp;Script_Execution_Details=true&amp;earliest=$earliest$&amp;latest=$latest$</link> </condition> </drilldown> </table> </panel> </row> <row> <panel> <title>TEST XRT Execution Dashboard</title> <table> <search> <query>index="aws_app_corp-it_xrt" sourcetype="xrt_log" "OK/INFO - 1012550 - Total Calc Elapsed Time" | rex field=source "(?&lt;Datetime&gt;\d{8}_\d{6})_usr@(?&lt;Username&gt;[\w\.]+)_ses@\d+_\d+_MAXL#(?&lt;TemplateName&gt;\d+)_apd@(?&lt;ScriptName&gt;[\w]+)_obj#(?&lt;ObjectID&gt;[^.]+)\.msh\.log" | rex "Total Calc Elapsed Time\s*:\s*\[(?&lt;calc_time&gt;\d+\.\d+)\]\s*seconds" | stats avg(calc_time) as AverageExecutionTime max(calc_time) as MaxExecutionTime by ScriptName, ObjectID, TemplateName | eval AverageExecutionTime = round(AverageExecutionTime, 0) | lookup script_tresholds ObjectID ScriptName MaxLTemplate as "TemplateName" OUTPUT Threshold AS "Treshold" | eval colorCode = if(AverageExecutionTime > Treshold, "#D94E17", "#55C169") | table ScriptName, AverageExecutionTime, MaxExecutionTime, Treshold, ObjectID, TemplateName, colorCode | search $ScriptName$ $ObjectID$ | sort - AverageExecutionTime</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="refresh.display">progressbar</option> <format type="color" field="AverageExecutionTime"> <colorPalette type="expression">if(AverageExecutionTime &gt; Treshold,"#D94E17", "#55C169")</colorPalette> </format> </table> </panel> </row>
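A likely cause, worth checking against the Simple XML reference: in `<colorPalette type="expression">`, the expression can only reference `value` (the cell's own value), not other columns such as Treshold, so `if(AverageExecutionTime &gt; Treshold, ...)` never matches and the fallback colour is used. A common workaround is to compute a status field in SPL and colour that column with a map palette (colouring AverageExecutionTime itself based on another column generally requires custom JavaScript). A minimal sketch:

```
... | eval Status = if(AverageExecutionTime > Treshold, "High", "Low")
    | table ScriptName, AverageExecutionTime, MaxExecutionTime, Treshold, Status
```

with the matching format stanza in the dashboard XML:

```
<format type="color" field="Status">
  <colorPalette type="map">{"High":"#D94E17","Low":"#55C169"}</colorPalette>
</format>
```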
Unfortunately, we had some issues with a recent Splunk upgrade to 9.4.1 and had to roll back to 9.3.2. However, I had just built a dashboard in Studio 9.4.1 with some drilldowns, and after the rollback I just get this helpful message: "Layout undefined is not defined". Any ideas what was added to Studio in 9.4.x that wouldn't be compatible with 9.3.x? The only part of the dashboard that loads is the time picker. If I view the source, everything is still there.
As the title suggests, I have multiple Universal Forwarders sharing the same Instance GUID due to a mistake by the owners of the servers, who were simply copying the .conf files from server to server. So my first question is: what is the impact of this? I think missing logs from these machines is one impact, as every 10-20 seconds the IP and hostname change while the Instance GUID and Instance Name remain the same. And of course the obvious question: how do I fix it? Solving it from a higher level of the system is preferred, for example via the deployment server (which I can SSH into), since I have limited access to the servers and would probably have to TeamViewer into them or write a guide for their owners. I read that the GUID now lives in the `$SPLUNK_HOME/etc/instance.cfg` file, and there is presumably a GUID there that is the same across the servers. Can I just delete the GUID line and restart the splunk service, and the deployment server would give it a new one? Can I delete the record from the Deployment Server UI so that a new one is generated and the Forwarder's instance.cfg file is auto-updated? I read the docs (instance.cfg.conf - Splunk Documentation) and they mention not to edit the file, so I am a bit confused. I also saw that the docs mention server.conf holds the GUID value too, so do I have to do anything in the server.conf file?
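One approach worth considering, hedged against your Splunk version's documentation: Splunk ships a CLI command intended for exactly this cloned-instance scenario, `splunk clone-prep-clear-config`, which clears the instance-specific identity (including the GUID) so a fresh one is generated on the next start. A sketch of the per-forwarder steps, assuming a default install path:

```
# On each affected forwarder, from $SPLUNK_HOME/bin
# (e.g. "C:\Program Files\SplunkUniversalForwarder\bin" on Windows):
./splunk stop
./splunk clone-prep-clear-config   # clears GUID and other instance identity
./splunk start                     # a new GUID is generated at startup
```

This still has to run on each host, but it could be scripted or put in the guide for the server owners; verify the exact behaviour for your UF version before rolling it out.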
Hello, I have a lookup table with fields user and src. I want to table results [user src] where the src in my search != the src listed in the lookup table. So first I need to find the matching user rows, then compare the src from the search with the src value in the lookup file. If the src is different, I want to table the new src value from the search. Can someone help me with this? Thanks so very much.
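A minimal sketch of this pattern, with a hypothetical index and lookup name: output the lookup's src under a different name, then keep only events where the two disagree.

```
index=your_index user=*
| lookup your_lookup user OUTPUT src AS expected_src
| where isnotnull(expected_src) AND src != expected_src
| table user src expected_src
```

The `isnotnull` guard drops users with no lookup entry; note that if the lookup has several rows per user, `expected_src` comes back multivalued and the comparison may need `mvfind` or similar handling.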
The two raw results are as follows:  (1) EventType="Device" Event="InstallProfileConfirmed" User="sysadmin" EnrollmentUser="hasubram" DeviceFriendlyName="blabla MacBook Air macOS 15.3.2 Q6LW" EventSource="Device" EventModule="Devices" EventCategory="Command" EventData="Profile=Apple macOS Apple Intelligence Restrictions" Event Timestamp: Mar 28 09:29:40 (2) EventType="Device" Event="DeviceOperatingSystemChanged" User="sysadmin" EnrollmentUser="hasubram" DeviceFriendlyName="blabla MacBook Air macOS 15.3.2 Q6LW" EventSource="Device" EventModule="Devices" EventCategory="Assignment" EventData="Device=75639" Event Timestamp: Mar 28 09:29:29 I'm hoping to build a search that identifies (1)'s DeviceFriendlyName="blabla MacBook Air macOS 15.3.2 Q6LW", which is the shared key between the two results, as long as (2) happens before (1) chronologically. I am already using the following to exclude certain results too:  Index=*** <<Search Parameters>> NOT  DeviceFriendlyName IN (*15.3.0*,*15.3.1*)   Thank you
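One way to sketch this kind of event-pairing in SPL (index and exclusions taken from the question; the rest is a hedged illustration): aggregate both event types per device, then keep only devices where the OS-change event precedes the profile-install event.

```
Index=*** EventType="Device" (Event="InstallProfileConfirmed" OR Event="DeviceOperatingSystemChanged")
    NOT DeviceFriendlyName IN (*15.3.0*,*15.3.1*)
| stats min(eval(if(Event="DeviceOperatingSystemChanged", _time, null()))) AS os_changed_time
        min(eval(if(Event="InstallProfileConfirmed", _time, null()))) AS profile_time
        BY DeviceFriendlyName
| where isnotnull(os_changed_time) AND isnotnull(profile_time) AND os_changed_time < profile_time
| table DeviceFriendlyName os_changed_time profile_time
```

This assumes each device emits the pair once in the search window; with repeated pairs per device you'd likely want `streamstats` or `transaction` instead of a single `stats` pass.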