All Topics

I scheduled a PDF delivery of a custom dashboard, but I can't seem to get the chart in the dashboard to fill the width of the page. I fiddled around with the Print button, and that generates a PDF where the charts do fill the width of the page, but automating that might be a bit of a pain. Any ideas? Splunk version: 8.1.6
We have multiple sites (n sites in total), and any site can fire alerts 9047 and/or 9251. If a site fires 9251, that site is in network maintenance mode and should not appear in the list. For example:

9047 is fired from A, B, C, D, E
9251 is fired from C, D

So the full picture is:

A 9047
B 9047
C 9251 9047
D 9251 9047
E 9047

Because C and D fired 9251, those sites/devices are under maintenance, so my output should contain only:

A 9047
B 9047
E 9047

How do I write a Splunk query for this?
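A possible starting point, offered only as a sketch: assuming each alert event carries fields named site and alert_code (adjust both to your actual field names), collect the alert codes per site and drop any site that ever fired 9251.

index=your_index (alert_code=9047 OR alert_code=9251)
| stats values(alert_code) AS alert_codes BY site
| where isnull(mvfind(alert_codes, "9251"))

The sites left in the result are those that never fired 9251, with alert_codes showing what they did fire.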
Good day. I have looked in the community posts and know that there is a daylight savings time bug in some Splunk UFs. I have not found a definitive answer as to which 8.x.x version resolves it. We have a number of endpoints at 8.2.1. Is the bug corrected in that version, or do they need to be upgraded? Thanks in advance.
Can I use tstats on a dbxquery or a saved search?
I have configured a Database Input in DB Connect to pull in data from an Oracle view. A sample string from one of the events follows:

2023-02-28 15:40:50.760, AUDIT_TYPE="Standard", OS_USERNAME="Administrator", TERMINAL="unknown", DBUSERNAME="RACOON", CLIENT_PROGRAM_NAME="SQL Developer", STATEMENT_ID="978", EVENT_TIMESTAMP="2023-02-28 18:40:50.76", ACTION_NAME="ALTER USER", OBJECT_NAME="SPLUNK", SQL_TEXT="ALTER USER "SPLUNK" DEFAULT ROLE "CONNECT","AUDIT_VIEWER"", SYSTEM_PRIVILEGE_USED="SYSDBA", CURRENT_USER="SYS", UNIFIED_AUDIT_POLICIES="ORA_SECURECONFIG"

However, when I run this search the fields are not correctly identified:

index=oracle_audit sourcetype=ID source=OracleAuditConnection

Specifically, what should be fields like TERMINAL, CLIENT_PROGRAM_NAME, and OS_USERNAME (among many others) are not identified as fields. Additionally, the search picks up values as fields that should not be fields at all (often from the SQL_TEXT field). For example, "ACTIONS ALTER ON SPLUNK.BAT" is picked up as a field rather than a value. I can improve the results a little by using the following:

index=oracle_audit sourcetype=ID source=OracleAuditConnection | extract pairdelim="\"{,}"

However, it still does not correctly identify all the fields, nor does it work on the more complicated SQL_TEXT field, which may contain quotation marks and equals signs at times. What can I do to have all of my fields extracted successfully? Is there any trick I can use, given that I am using DB Connect?
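One hedged, search-time workaround, assuming the keys are always upper-case names followed by ="..." and that SQL_TEXT is always followed by SYSTEM_PRIVILEGE_USED (as in the sample above); the index, sourcetype, and source values are copied from the question. Pulling SQL_TEXT out with an explicit boundary keeps its embedded quotes from confusing automatic key-value extraction, and the simpler fields can be picked up with targeted rex calls:

index=oracle_audit sourcetype=ID source=OracleAuditConnection
| rex field=_raw "OS_USERNAME=\"(?<OS_USERNAME>[^\"]*)\""
| rex field=_raw "TERMINAL=\"(?<TERMINAL>[^\"]*)\""
| rex field=_raw "CLIENT_PROGRAM_NAME=\"(?<CLIENT_PROGRAM_NAME>[^\"]*)\""
| rex field=_raw "SQL_TEXT=\"(?<SQL_TEXT>.*?)\", SYSTEM_PRIVILEGE_USED="

The same patterns could later be moved into EXTRACT- settings in props.conf on the search head once they behave the way you want.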
My string is:

"abcdxyz|11.2.0000|56|12120|32|1005|15|32|7742|5|54|336|446|203473<"

This string appears inside a huge log entry. I want to extract it, then take the last four fields from it and map them to a graph. I tried using

(name="*abcdxyz|11.2.0000|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]|[0-9]<*")

but I am getting a lot of noise.
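A possible rex sketch, assuming the record always starts with abcdxyz|11.2.0000| and ends with < (the capture names field_a through field_d are made up for illustration). Note that each [0-9] in the attempt above only matches a single digit, which is likely where the noise comes from; \d+ matches the full multi-digit values:

index=your_index "abcdxyz|11.2.0000|"
| rex "abcdxyz\|11\.2\.0000\|(?:\d+\|)+(?<field_a>\d+)\|(?<field_b>\d+)\|(?<field_c>\d+)\|(?<field_d>\d+)<"
| timechart span=5m avg(field_a) avg(field_b) avg(field_c) avg(field_d)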
Hello, let's say I have many dashboards; inside each dashboard I have 10 base searches and many visualizations on top of them. Some users of the dashboards have restricted rules applied to their searches, sometimes by time and sometimes by size. My problem is that an end user who opens a dashboard and chooses a time range cannot tell that the data they see is sometimes limited by the rules that apply to them. The normal way to know whether the data you see in a panel is limited is to hover over the panel's small buttons and click the Inspect button. I tried to write a script that shows a popup to warn the user if they were restricted, but there must be a better way to achieve this. I have seen that Splunk has a built-in option in the drop-down menu of a search that shows such a message when it finds one. Does anyone have an idea how to implement this in a script that will affect all dashboards? I am attaching the script I wrote and some pictures to show what I mean.

require([
    'splunkjs/mvc',
    "splunkjs/mvc/searchmanager",
    'jquery',
    "/static/app/mce/javascript/popup_Modal.js",
    "splunkjs/mvc/simplexml/ready!"
], function (mvc, SearchManager, $, Modal) {
    var registry = mvc.Components;
    console.log(registry);

    // Grab specific env tokens
    var envTokenModel = mvc.Components.get('env');
    var username = envTokenModel.get('user');
    var app = envTokenModel.get('app');
    var page = envTokenModel.get('page');

    var searchMceJobs = "| search index=_introspection | rename search_id as JobId | join left JobId [| rest /services/search/jobs | rename dispatchState as Status eai:acl.app as App title as Search author as User runDuration as Runtime published as Published id as ID provenance as Provenance | rex field=Provenance \"UI:Dashboard:(?<Dashboard>.+)\" | search Dashboard=*| rex field=ID \"(?<JobId>[^//]*)$\"| eval Status=mvjoin(mvsort(mvdedup(split(mvjoin(Status,\",\"),\",\"))),\",\")| eval Runtime=round(Runtime,1) | where User = " + '"' + username + '"' + " And App = " + '"' + app + '"' + " And Dashboard = " + '"' + page + '"' + " ] | stats values(messages.info) as MSG by JobId User Dashboard App updated | table MSG | sort updated";

    // Log all env tokens
    console.log(envTokenModel.toJSON());

    // React to env token changes:
    envTokenModel.on('change', function () {
        //console.log(arguments);
    });

    setTimeout(function run() {
        // Create the search manager
        var mysearch = new SearchManager({
            id: 'MceSearch',
            cache: false,
            preview: true,
            search: searchMceJobs,
            earliest_time: "-10s",
            latest_time: "now",
        });
        mvc.Components.revokeInstance("MceSearch");

        mysearch.on('search:done', function (properties) {
            console.log(properties.content);
            console.log(properties.content.resultCount);
            if (properties.content.resultCount > 0) {
                // Print the search job properties
                console.log("DONE!\nSearch job properties:", properties.content.resultCount);
                var oldResult = properties.content.resultCount;
                var myModal = new Modal("popupModal", {
                    title: properties.content.resultCount,
                    backdrop: 'static',
                    keyboard: false,
                    destroyOnHide: true,
                    type: 'normal'
                });
                $(myModal.$el).on("hide", function () {
                    //console.log('test123')
                    // Not taking any action on hide, but you can if you want to!
                });
                myModal.body.append($(`<p>${properties.content.resultCount}</p>`));
                myModal.footer.append($('<button>').attr({
                    type: 'button',
                    'data-dismiss': 'modal'
                }).addClass('btn btn-primary').text('ok').on('load', function () {
                    // Not taking any action on Close... but I could!
                }));
                myModal.show(); // Launch it!
            }
        });
        //setInterval(run, 5000);
    }, 5000);

    console.log('timeout active');
});
Hey community, I need your help! We have a lot of internal WARN logs for DateParserVerbose in our Splunk prod environment, despite passing correct values in the TIME_FORMAT, TIME_PREFIX, and MAX_TIMESTAMP_LOOKAHEAD attributes in our props.conf. I have listed the warn logs, sample logs, and props.conf below for reference.

Example internal warn log:

Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (30) characters of event. Defaulting to timestamp of previous event for sourcetype-test

Sample raw event logs:

Mar 1 07:31:00 xxxxxxx info-message(time=2023-03-01T07:31:00.137, appname=abc, user=john, server=xxx, port=123, msg=logged in) [] [logger] [https:xxxx]
Mar 1 08:29:33 xxxxxxx info-message(time=2023-03-01T08:29:33.135, appname=abc, user=moon, server=yyy, port=897, msg=logged in) [] [logger] [https:xxxx]

Below are our props and transforms, which are used to ingest only the clean and required logs to Splunk prod:

[sourcetype-test]
SHOULD_LINEMERGE = false
LINE_BREAKER = (time\=)|\w+\s+\d+\s+\d+:\d+:\d+|\)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3QZ
MAX_TIMESTAMP_LOOKAHEAD = 32
TRANSFORMS-test = test_null, test_parsing

[test_null]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[test_parsing]
REGEX = appname
DEST_KEY = queue
FORMAT = indexQueue

Below are the clean log samples that are ingested into Splunk as expected. But when I check the internal logs for this sourcetype, I see a lot of DateParserVerbose warnings. So I just want to know: why are there warn logs when the time-related settings are correct, and is there any way to fix my props config to avoid the DateParserVerbose warnings?

time=2023-03-01T07:31:00.137, appname=abc, user=john, server=xxx, port=123, msg=logged in
time=2023-03-01T08:29:33.135, appname=abc, user=moon, server=yyy, port=897, msg=logged in

Thanks in advance!
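One hedged line of investigation: the nullQueue routing runs after timestamp extraction, so the fragments produced by the LINE_BREAKER (the syslog headers and the trailing ") [] [logger] ..." pieces) still pass through the date parser and can trigger DateParserVerbose warnings even though they are never indexed. A sketch of an alternative, assuming you are willing to keep one event per syslog line and trim it with SEDCMD instead of carving events apart with the LINE_BREAKER (test on a dev index first):

[sourcetype-test]
SHOULD_LINEMERGE = false
# one event per syslog line
LINE_BREAKER = ([\r\n]+)
# jump straight to the key=value payload for timestamp extraction
TIME_PREFIX = time=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
# strip everything before "time=" and everything after the closing ")"
SEDCMD-strip_prefix = s/^.*?\(time=/time=/
SEDCMD-strip_suffix = s/\).*$//
TRANSFORMS-test = test_null, test_parsing

The existing test_null / test_parsing transforms can stay if you still need to drop lines that do not contain appname.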
Hello Splunkers!

I have 5 file paths which we are monitoring:

D01A01023(Z+01) -- data is not coming
D01A02023(Z+01) -- data is coming fine
D01A03023(Z+01) -- data is not coming
D01A04023(Z+01) -- data is coming fine
D01A05023(Z+01) -- data is coming fine

The files are similar and the log patterns are the same for all of them, but logs are coming into Splunk from only 3 of the 5 files. I have checked inputs.conf, props.conf, and transforms.conf, and they all look fine. I am still trying to figure out what else I need to check to troubleshoot this issue.
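A hedged troubleshooting sketch, assuming the forwarder's _internal logs reach your indexers (swap in one of the missing file names): splunkd's tailing components normally log why a monitored file is being skipped (already-seen CRC, permissions, ignoreOlderThan, and so on), so searching them is often the quickest next step.

index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) "D01A01023"
| table _time host component log_level _raw

If a CRC clash is the culprit, the messages often mention the file matching one that has already been read.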
Hello, I have a scenario where I need to create a custom column (Status) that should be defined based on a set of criteria.

Input:

CorrelationID  tracePoint
123            START
123            BEFORE REQUEST
123            AFTER REQUEST
123            END
456            START
456            BEFORE REQUEST
456            EXCEPTION
789            START
789            AFTER REQUEST

Expected output:

CorrelationID  tracePoint       Status
123            START            SUCCESS
123            BEFORE REQUEST   SUCCESS
123            AFTER REQUEST    SUCCESS
123            END              SUCCESS
456            START            ERROR
456            BEFORE REQUEST   ERROR
456            EXCEPTION        ERROR
789            START            UNKNOWN
789            AFTER REQUEST    UNKNOWN

Rule: for a given CorrelationID, the Status should be set to ERROR if that CorrelationID has a tracePoint=EXCEPTION, to SUCCESS if it has a tracePoint=END, and to UNKNOWN if it has neither tracePoint=EXCEPTION nor tracePoint=END. Can you give me some guidance on how to achieve this? Thanks!
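A possible sketch, assuming CorrelationID and tracePoint are already extracted as fields (the index name is a placeholder): gather every tracePoint per CorrelationID with eventstats, then derive Status with a case() expression.

index=your_index
| eventstats values(tracePoint) AS all_tracepoints BY CorrelationID
| eval Status=case(
    isnotnull(mvfind(all_tracepoints, "^EXCEPTION$")), "ERROR",
    isnotnull(mvfind(all_tracepoints, "^END$")), "SUCCESS",
    true(), "UNKNOWN")
| table CorrelationID tracePoint Status

Because eventstats keeps the original events, every row for a given CorrelationID ends up with the same Status, matching the expected output above.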
Hi, I want to write a "Sources Sending High Volume DNS Traffic" rule in Splunk. However, the following calculation does not work for me, and the rule does not work correctly without it either. What is this calculation for, and how can I change it?

| where num_data_samples >= 4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")
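For context, that filter keeps only sources whose outbound byte count in a time bucket is more than three standard deviations above both the overall average and that source's own average, and only for buckets in the most recent hour; the avg/stdev/maxtime fields it references must be computed earlier in the search. A hedged sketch of how those fields are often produced (the index, sourcetype, and the bytes_out/src field names are assumptions, not your actual data):

index=dns sourcetype=your_dns_sourcetype
| bin _time span=1h
| stats sum(bytes_out) AS bytes_out BY src, _time
| eventstats count AS num_data_samples avg(bytes_out) AS per_source_avg_bytes_out stdev(bytes_out) AS per_source_stdev_bytes_out BY src
| eventstats avg(bytes_out) AS avg_bytes_out stdev(bytes_out) AS stdev_bytes_out max(_time) AS maxtime
| where num_data_samples >= 4 AND bytes_out > avg_bytes_out + 3 * stdev_bytes_out AND bytes_out > per_source_avg_bytes_out + 3 * per_source_stdev_bytes_out AND _time >= relative_time(maxtime, "@h")

Lowering the 3x multiplier or the num_data_samples threshold makes the rule more sensitive; raising them makes it stricter.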
Hi everyone, I have sample logs for a Virsec event. Below is a sample event:

Mar 1 06:24:05 xxx.xxx.xxx.xxx CEF:1|Virsec Security Platform|Virsec|x.x.x|41|Library Monitoring|10|EventId=VS-NA-030123-A02447| Server_Name=xxxxxx Incident_Level=ATTACK Incident_Category=FILE_INTEGRITY Incident_Type=Library Monitoring Incident_Timestamp=01 Mar 2023 02:24:05 PM GMT Process Checksum=097ce5761c89434367598b34fe32893b Action=LOG Parameters=cmdline Parent Process Name=cmd.exe Process Threat Verification Status =Safe Process Path=C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe Library Checksum=2cdb991bbbb60eb91c2df5f68e96e8fe Canary No=1220704979 Process Profile Name=xxxxxxxxxxxxxxxxxxxxxx Number of Libraries=1 Library Name=EdrDotNet.UnmanagedLib.dll Start Time=2023-03-01T14:23:58.661-05:00 Process Name=powershell.exe Process Profile Id=494037864 processObjectId=63f35f7a0ac3c9670c943e14 Username=xxxxxxx\xxxxxxx libraryObjectId=63f35f7a0ac3c9670c943e16 Library Path=C:\Windows\System32\EdrDotNet.UnmanagedLib.dll Event Type=New Library for Process Incident Type=Library Injection Process Pid=9632 Type=Library Monitoring Incident Description=Library Monitoring eventTime=2023-03-01T14:23:58.661-05:00 category=FILE_INTEGRITY threatCode=LibraryInjection

I have created parsing for this using EXTRACT- in props.conf inside a separate app:

[virsec:library]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD=15
TIME_FORMAT=%b %d %H:%M:%S
EXTRACT-processpath = Process Path=(?<ProcessPath>.*) Library Checksum
EXTRACT-ParentProcessName = Parent Process Name=(?<ParentProcessName>.*) Process Threat
EXTRACT-LibraryName = Library Name=(?<LibraryName>.*) Start Time
EXTRACT-ProcessName = \d\sProcess Name=(?<ProcessName>.*) Process Profile Id
EXTRACT-LibraryPath = Library Path=(?<LibraryPath>.*) Event Type
EXTRACT-ProcessProfileName = Process Profile Name=(?<ProcessProfileName>.*) Number of Libraries
EXTRACT-ProcessThreatVerificationStatus = Process Threat Verification Status =(?<ProcessThreatVerificationStatus>.*) Process Path

I am not sure why the parsing is not working. Can somebody help?
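A hedged alternative to try, assuming the events really are indexed with sourcetype=virsec:library. EXTRACT- settings are applied at search time, so this stanza needs to live in a props.conf on the search head, in an app whose permissions make it visible to the role doing the searching; the index-time settings (LINE_BREAKER, TIME_FORMAT, and so on) only matter on the indexer or heavy forwarder. Lazy quantifiers and explicit boundaries keep each capture from running past its own key/value pair:

[virsec:library]
EXTRACT-processpath = Process Path=(?<ProcessPath>.*?)\s+Library Checksum=
EXTRACT-ParentProcessName = Parent Process Name=(?<ParentProcessName>.*?)\s+Process Threat
EXTRACT-LibraryName = Library Name=(?<LibraryName>.*?)\s+Start Time=
EXTRACT-ProcessName = (?<!Parent )Process Name=(?<ProcessName>.*?)\s+Process Profile Id=
EXTRACT-LibraryPath = Library Path=(?<LibraryPath>.*?)\s+Event Type=
EXTRACT-ProcessProfileName = Process Profile Name=(?<ProcessProfileName>.*?)\s+Number of Libraries=
EXTRACT-ProcessThreatVerificationStatus = Process Threat Verification Status =(?<ProcessThreatVerificationStatus>.*?)\s+Process Path=

A quick way to validate any of these before putting them in props.conf is to run the same pattern through | rex in a search, e.g. | rex "Process Path=(?<ProcessPath>.*?)\s+Library Checksum=".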
Hi all, I need help with a regex that extracts "Major" as a new field, together with the message that follows it:

Major SIPCM *SipCmRecvFromTcpSocket: Error in reading data on socketId 247, errno=104
Major NRS ARP lookup for 216.20.237.19 on interface pkt0 with addrContextId 1 failed: SIOGARP error , error 6
Major LVM *NpMediaYmacRespHdlr: error code 0x3 recvd for bcm cmd 4
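A possible rex sketch, assuming the severity is always the first word of the line (the field names severity and message are made up for illustration):

your_base_search
| rex "^(?<severity>Major)\s+(?<message>.+)$"
| table severity message

Replacing Major with \w+ in the capture would pick up other severities (Minor, Critical, and so on) with the same pattern.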
I'm using some email alert actions without attachments included. My users aren't technical, so when they click on "view results" in an email, the resulting search timeline view within Splunk is overwhelming for them. They're dashboard users, not search users. I'm not aware of a good and simple way to basically just not show the search dialog along with the results on the timeline view (e.g. via some savedsearches.conf "view"/"ui" related config option), though I'd love to hear about one. Instead, I thought to send a custom email in the alert such as:

Your Splunk report '$name$' is ready. Results can be retrieved at the following link: https://splunk:8000/en-US/app/app/alert_test?search_name=$name$&search_id=$job.sid$

Where the dashboard is basically just this:

<row>
  <panel>
    <title>Search Name: $search_name$</title>
    <table>
      <search>
        <query>| loadjob $search_id$</query>
      </search>
      <option name="count">100</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">true</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
  </panel>
</row>

That works, but the problem is that the standard token encoding methods such as $name|u$ don't appear to actually work in alert emails, so spaces and special characters break the link. And HTML support for links seems super broken, so I appear to have to send a plain-text email and allow magic linking to happen, but that doesn't handle the spaces / unencoded name value. I'm hoping for a simple way to deal with this. The only things I can think of involve either not passing the search name to the dashboard, or cloning the sendemail command to try to make it encode things and then losing the normal alert configuration UI, etc. This seems like it should be easier to handle all the way around. Why am I forced to push users into a web page that starts with a hideous 40-line search?
We're indexing a set of standard IIS W3C logs into our indexer and need to obtain a list of the parent sites for every URL recorded in the logs, along with the latest time each was called. I've got a search that returns every URL and the latest time it was recorded in the log, but I can't work out how to break the URLs down to their parent and query by that instead. I think I need to use a regex somehow (or maybe some kind of eval?) but I'm not sure. Has anyone done this before? This is the base search I'm using:

index=iis s_contentpath = "/sites/Teams*"
| dedup s_contentpath
| chart last(_time) as Time by s_contentpath
| convert ctime(Time)
| sort Time asc

I've attached a small sample of the result set. For these examples, I'd be looking to return something like this:

Parent URL                    Time
/sites/Teams5/Nat_Man_MLP/    02/07/2023 10:55:00
/sites/Teams5/connect-HRIS/   02/07/2023 11:04:26
/sites/Teams5/C-D/            02/07/2023 12:39:19

Any help would be much appreciated.
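A possible sketch, assuming the "parent" is always the first two path segments under /sites/ (as in the examples above): derive it from s_contentpath with rex, then take the latest time per parent.

index=iis s_contentpath="/sites/Teams*"
| rex field=s_contentpath "^(?<parent_url>/sites/[^/]+/[^/]+/)"
| stats max(_time) AS Time BY parent_url
| convert ctime(Time)
| sort Time asc
| rename parent_url AS "Parent URL"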
While creating a connection to an MSSQL instance in DB Connect, I am getting "cannot open database requested by login. The login failed." But the DB team can log in with the same user and access the same instance. What could be causing the issue? I am using the jTDS driver with Windows authentication, and DB Connect 3.11, which is the latest version. Any help would be greatly appreciated. Thanks in advance.
We have two separate events which share a common field, x-provider-api-correlation-id. In the first event it comes as part of the HTTP response header, and in the second event it comes as part of the HTTP request header. My requirement is to derive a start time, which is _time - (time_to_serve_request/1000), and an end time, which is _time, from these two separate events, matched on the x-provider-api-correlation-id value they share.
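A possible sketch, assuming both events can be found by one search, the correlation id is extracted into a field (shown here as x_provider_api_correlation_id), and time_to_serve_request in milliseconds exists on the event carrying the response header; the end time is taken as the later of the two events' _time values, so adjust if it should come from a specific event. The index and sourcetype names are placeholders:

index=your_index sourcetype=provider_api_events
| eval start_time=if(isnotnull(time_to_serve_request), _time - (time_to_serve_request/1000), null()), end_time=_time
| stats min(start_time) AS start_time max(end_time) AS end_time BY x_provider_api_correlation_id
| eval duration=end_time - start_time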
How can I perform a Splunk subsearch through the Splunk Java SDK?
Hi all, is Splunk Universal Forwarder version 9.0.4.0 supported on Windows Server 2012 R2?
Hi all, I noticed a problem with the latest version of your iOS agent framework (2023.1.0). There is an incompatibility with Alamofire that causes a runtime crash during a network request. I had to downgrade to version 2022.5.0 to solve the problem.