All Topics


Hello, I want to download the MS Exchange app, but it's not available via https://splunkbase.splunk.com/app/1660. Where can I find version 4.0.4 of this app?
Hi Splunk Works, for the app https://splunkbase.splunk.com/app/3757/: to pull in non-default AAD user fields such as companyName and country, we modified input_module_MS_AAD_user.py, and it works fine. See https://docs.microsoft.com/en-us/graph/api/resources/user?view=graph-rest-1.0 (properties section). We would like to see this as a built-in feature for users and devices in the next versions, if possible.
Hello! We currently have two separate alerts: one that prints a list of devices and another that prints a list of records related to those devices (I used the map command to iterate over the list of devices to print the list of records for each device). So currently we get two emails, one right after the other; the first has the list of devices and the second has the records for those devices. Is there a way to print the list of devices with the list of all their records right below it, in a single email?
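
One way to get everything into a single email is to drive one alert from a single search that returns a row per device with all of its records grouped beside it, instead of mapping a second search per device. A sketch, where the index names (device_idx, record_idx) and field names (device, record_id) are placeholders for whatever the two existing alerts actually use:

index=record_idx
    [ search index=device_idx | stats count by device | fields device ]
| stats values(record_id) as records by device
| table device records

The alert's email action then sends this one table, so the device list and its records arrive in the same message.
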
Anyone else ever see a 500 bad request error on the Splunk Enterprise logon page? If I clear my cookies for the logon page and refresh, that fixes the issue for a few days or longer, but then it appears again. Anyone know how to fix it? Thanks in advance. Ben
Hi, I am using the network diagram viz and I need to change the color of the nodes that also have values for one specific field, "SSH". Here is my current Splunk query:

index=fraud_glassbox (sourcetype="gb:hit" OR sourcetype="gb:sessions") 44ead780-cf74-11ec-915e-005056b040ae
| eval time_epoch = strptime('SESSION_TIMESTAMP', "%Y-%m-%d %H:%M:%S")
| convert ctime(time_epoch) as hour_minute timeformat="%Y-%m-%d %H:%M"
| eval SEQUENCEto = tonumber(SEQUENCE) + 1
| strcat URL_PATH ":" SEQUENCE from
| autoregress from as to
| eval color = "red"
| table from, to, color, Premier_RC_Code_SSH

Is it possible to incorporate an if-like statement or subsearch that would turn all such nodes blue if there are values for the "SSH" field?
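
A conditional eval can set the node color from whether the SSH field is populated; a sketch that would replace the fixed | eval color = "red" line, assuming Premier_RC_Code_SSH is the "SSH" field in question:

| eval color = if(isnotnull(Premier_RC_Code_SSH) AND Premier_RC_Code_SSH != "", "blue", "red")
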
I have a sourcetype that provides results as dst when there is a single value, or as dst{} when there are multiple values. I am attempting to get this into a data model, but I can't get dst{} to work. Aliasing dst=dest works just fine, but dst{}=dest does not. When searching dst{}=<IP address> directly, the search works just fine, so I know it has no issue finding the data; I am just missing whatever is needed to make it work within a data model. After researching for a couple of days and failing, I thought I'd ask the community for its knowledge.
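
If the blocker is that a field alias cannot be applied to the multivalue dst{} field, a calculated field that coalesces both spellings into dest is a common workaround; a sketch for props.conf, with a placeholder sourcetype name:

[my_sourcetype]
# Prefer dst{} when it exists, otherwise fall back to dst; single quotes are
# needed in eval to reference a field name containing braces.
EVAL-dest = coalesce('dst{}', 'dst')
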
I'm having some issues getting my LINE_BREAKER configuration to work for a custom log file. I've tested the regex and it matches the beginning of every line; however, it's still breaking extremely strangely. Here's the configuration we're running as well as a sample of the log. The screenshot at the bottom is what it's actually doing.

MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT = %Y-%m-%d_%I%M %p
TIME_PREFIX = ^
TZ = MST
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}_\d{4} [A|P]M[\s\r\n]+\d{2}

---

2022-05-10_1120 AM 10.12.14.3 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=0%
2022-05-10_1120 AM 10.12.14.4 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=0%
2022-05-10_1120 AM 10.12.14.5 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=0%
2022-05-10_1120 AM 10.12.14.81 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=58%
2022-05-10_1120 AM 10.12.14.82 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=73%
2022-05-10_1120 AM 10.12.14.88 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=0%
2022-05-10_1120 AM 10.12.14.91 HSM device 0: HSM in NORMAL MODE. RESPONDING. Usage Level=0%
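
A minimal props.conf sketch of an alternative breaker (the sourcetype name is a placeholder, and these settings would need to live on the indexer or heavy forwarder that parses this data). It breaks purely on the leading timestamp via a lookahead, so nothing after the newline is consumed, and it tightens [A|P] (which also matches a literal "|") to [AP]:

[hsm_status]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}_\d{4} [AP]M )
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d_%I%M %p
MAX_TIMESTAMP_LOOKAHEAD = 20
TZ = MST
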
I have a Windows .ini file that I want to index on every update of the file. Right now, when the file is updated, it is not being re-indexed. The file doesn't have much data in it, just about 1K worth. Whenever the file is updated, not much of it changes, mostly just a couple of values referencing the build number for the application it goes with. Ideally, I would like the whole file to be re-indexed every time any change is made to it. Has anyone tried this, or have thoughts on it? I guess if all else fails I could do a scripted input on a schedule, but that would mean I would not get the updates right away, and I would also get lots of useless data since most of the scheduled polls would have no change.
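
If the goal is to re-read the whole file on any change, one approach is a monitor input combined with CHECK_METHOD = modtime in props.conf, which re-indexes the entire file whenever its modification time changes. A sketch, with a hypothetical path, index, and sourcetype:

# inputs.conf on the forwarder
[monitor://C:\Program Files\MyApp\settings.ini]
index = main
sourcetype = myapp_ini

# props.conf on the same instance
[source::C:\Program Files\MyApp\settings.ini]
CHECK_METHOD = modtime

The trade-off is that the full contents land in the index on every change, which for a ~1K file is exactly the behavior being asked for here.
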
Could someone help me with the Splunk configuration so that the following events show up as independent events in Splunk search?

[my_sourcetype]
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX =
TIME_FORMAT =
Hi all! I'm trying to create a table with case_number and session as the two columns. Any event without a case_number won't show up in the table. How do I get them to show up?

index=cui botId=123456789 case_number=* session=* | table case_number session

I tried using | fields case_number instead, but that didn't work either. Appreciate any help!
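
The case_number=* term is what drops events that have no case_number, so removing it and filling the empty cells keeps those events in the table. A sketch reusing the search above (the fill value is arbitrary):

index=cui botId=123456789 session=*
| fillnull value="N/A" case_number
| table case_number session
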
Hi, I have a number of raw logs that I need to extract some fields from. When I go to "Event Actions" and then "Extract Fields", I normally get the field extraction workflow. However, for the logs in one index I get a different page instead and I cannot extract anything. How can I extract fields in this case? Thanks, Patrick
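
Even when the interactive field extractor isn't offered, fields can still be pulled at search time with rex; a sketch with a hypothetical index, sourcetype, and pattern, to be adapted to the actual log format:

index=my_index sourcetype=my_sourcetype
| rex field=_raw "user=(?<user>\S+)\s+action=(?<action>\w+)"
| table _time user action
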
Hi, I don't know if this question was previously addressed by the users who asked about multi-stage Sankey diagrams or user-flow displays (the classical marketing scenario of web users navigating a webshop from a start page to the final cart page and spotting the drop-out locations). It is, though, a valid diagramming scenario; it has been made very popular by various analytics platforms like Teradata or Qlik, and it has also been named "Path Sankey" by various developers (https://github.com/DaltonRuer/PathSankey). The QlikSense implementations are also based on modified versions of d3.js, similar to the Splunk app. The closest request that someone posted here is maybe https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-search-and-aggregate-user-behavior-data-in-a/m-p/482333 (since I am a beginner Splunker, it might be the same topic). Basically it extends the two-layer source-target concept of standard d3.js to source-multiple_layers-target. The best example can be seen at https://bl.ocks.org/jeinarsson/e37aa55c3b0e11ae6fa1, and one can imagine that the number of layers and nodes is limited only by CPU power and RAM (although JavaScript limitations exist in almost all browsers).

A practical example (from my field of interest) would be this: suppose we have a hospital with five units through which a patient may pass (not necessarily all of them), and we want to see the patient referral flow between the doctors of these units. We would have, for example, 1000 patient IDs, and for each of them various flows based on referrals from the first unit's doctor to the last one the patient sees (of course not necessarily in alphabetical order, and not always five referrals). So we would display 5 layers in the Sankey chart, each layer displaying vertically the corresponding doctor names of that unit as nodes, with node thickness according to the number of incoming links from the previous layer's nodes, equal to count(patient_id). It would be the same as https://bl.ocks.org/jeinarsson/e37aa55c3b0e11ae6fa1 but with 5 layers and a variable number of nodes according to the inputlookup set.

Does anybody know a way to tweak the current Sankey app search along these lines?

| inputlookup referrals.csv
| stats count(patient_id) by 1st_Referring_Layer 2nd_Referring_Layer
--- maybe ?? ---
| stats count(patient_id) by 2nd_Referring_Layer 3rd_Referring_Layer ???
| stats count(patient_id) by 3rd_Referring_Layer 4th_Referring_Layer ???
| stats count(patient_id) by 4th_Referring_Layer 5th_Referring_Layer ???

If the solution from https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-search-and-aggregate-user-behavior-data-in-a/m-p/482333 is exactly what I am asking for above, please advise. Thank you
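
One common way to feed a multi-stage Sankey in Splunk is to compute one source/target pair per adjacent layer and append the results. A sketch against referrals.csv, where the column names doctor_unit1 ... doctor_unit5 are placeholders for the actual lookup fields:

| inputlookup referrals.csv
| stats count(patient_id) as count by doctor_unit1 doctor_unit2
| rename doctor_unit1 as source, doctor_unit2 as target
| append
    [| inputlookup referrals.csv
     | stats count(patient_id) as count by doctor_unit2 doctor_unit3
     | rename doctor_unit2 as source, doctor_unit3 as target]
| append
    [| inputlookup referrals.csv
     | stats count(patient_id) as count by doctor_unit3 doctor_unit4
     | rename doctor_unit3 as source, doctor_unit4 as target]
| append
    [| inputlookup referrals.csv
     | stats count(patient_id) as count by doctor_unit4 doctor_unit5
     | rename doctor_unit4 as source, doctor_unit5 as target]
| table source target count

Whether the stock Sankey viz renders this as five distinct layers depends on the node names being unique per layer (for example, prefixing each doctor with the unit name), so treat this as a starting point rather than a confirmed solution.
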
Hey Splunkers!!! We are planning to deploy the DB Connect app to get data from an Oracle database, and I have the below queries related to the Splunk DB Connect app. Please assist.

1) Can we increase the data fetch limit from 300 to 1000 or some other higher value? fetch_size = <integer> # optional # The number of rows to return at a time from the database. The default is 300.
2) Will this increased fetch size in db_inputs.conf affect database performance?
3) Does this fetch limit differ depending on the database?
4) If we already have existing scripts that get the same information using some other tool, what are the advantages of the Splunk DB Connect app over them?
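
For question 1, fetch_size is set per input in db_inputs.conf; a minimal sketch of a batch input with a larger fetch size, where the input name, connection, query, index, and sourcetype are all hypothetical:

[oracle_orders_input]
connection = oracle_prod
mode = batch
query = SELECT * FROM orders
fetch_size = 1000
interval = 3600
index = main
sourcetype = oracle:orders
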
We have a setup where AWS KMS logs are sent to Splunk HEC through the flow below. We are getting the JSON event format, but the data does not seem to get the necessary field aliasing and tagging on the SH to be CIM-compatible for the Authentication and Change data models. I have already installed Splunk_TA_aws on both the SH and the HF.

KMS -> Kinesis Firehose -> logstash function -> Splunk HEC (using the aws:cloudtrail sourcetype)

Should I be using a different sourcetype for this data source and the data format sent through my flow? Can anyone who has worked with AWS KMS data advise?
Hello All, after installing the IT Essentials Work app in Splunk from the Apps dropdown, I am getting the below error while trying to launch the app. However, when I hit the URLs for the IT Essentials Work pages individually, such as entity management and infrastructure overview, the tabs work fine and the data comes up. Could you please help with insights into why the ITE Work app is not launching when started from the Apps dropdown? Thanks.
Hello. Our organization has one of our data model (DM) searches for ES regularly taking over 200 seconds to complete. Soon another source will be added to the DM, so I have been looking for ways to reduce the runtime. I came across a site that suggested the macros that build the CIM DMs could be faster by adding the sourcetype alongside the index in the search. My thought was, "Why stop there?" You could add the source too, as long as it doesn't iterate with a date or some random number scheme, and even then, with reasonable wildcarding at the end, I believe there would be a performance improvement. I was told that this effort is unnecessary, even though in my unofficial tests over the same period I found my modified searches to be nearly twice as fast. So, aside from the additional effort to build and maintain those searches, why is this unnecessary? Thanks, AJ
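
For illustration, this is the kind of change being described: a CIM constraint macro narrowed by sourcetype as well as index. The stanza follows the cim_<datamodel>_indexes naming convention, and the index and sourcetype values below are placeholders, not a recommendation for any specific environment:

# macros.conf (hypothetical values)
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_auth) (sourcetype=XmlWinEventLog OR sourcetype=linux_secure)
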
Hi Community, I need to filter data based on a specific field value and route it to a different group of indexers. Data is coming through HEC configured on a heavy forwarder like this:

[http://tokenName]
index = main
indexes = main
outputgroup = my_indexers
sourcetype = _json
token = <string>
source = mysource

I'd like to use props.conf and transforms.conf as suggested here, like this:

props.conf
[source::mysource]
TRANSFORMS-routing = otherIndexersRouting

transforms.conf
[otherIndexersRouting]
REGEX = \"domain\"\:\s\"CARD\"
DEST_KEY = _TCP_ROUTING
FORMAT = other_indexers

In outputs.conf I'd add the stanza [tcpout:other_indexers]. Is this possible? Is there another way to achieve this goal? Thank you, Marta
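
A sketch of the matching outputs.conf on the heavy forwarder (the indexer addresses are hypothetical); keeping my_indexers as the default group means anything that does not match the transform keeps flowing to the original indexers:

[tcpout]
defaultGroup = my_indexers

[tcpout:other_indexers]
server = idx3.example.com:9997, idx4.example.com:9997
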
Would like a way to create a dropdown with add and remove choices that will then add or remove the user from the lookup table. So far I have:

<input type="dropdown" token="dropdown_tok" searchWhenChanged="false">
  <label>Action</label>
  <choice value="add">Add</choice>
  <choice value="remove">Remove</choice>
  <choice value="reauthorize">Reauthorize</choice>
  <search>
    <query>
    </query>
  </search>
</input>

Any help would be great!
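
A sketch of the update search the dashboard could run once the tokens are set. It assumes the lookup is authorized_users.csv with a single user column, and that a separate text input supplies $user_tok$ for the user being added or removed; both names are placeholders:

| inputlookup authorized_users.csv
| eval action="$dropdown_tok$", selected="$user_tok$"
| where NOT (action="remove" AND user=selected)
| append
    [| makeresults
     | eval user="$user_tok$", action="$dropdown_tok$"
     | where action="add"
     | fields user]
| fields - action selected
| dedup user
| outputlookup authorized_users.csv

The "reauthorize" choice could be mapped onto the same "add" branch or given its own condition, depending on what it is meant to do.
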
Hello,

Q1: While configuring a #splunk_itsi KPI, under the thresholding section there is an option to enable KPI alerting. As configured below, the notable event is created when the severity changes from any lower level to critical. My question is whether there is a way to trigger a notable event when the status is critical, regardless of the state it was in before. In other words, when the severity remains critical from the first check point to the second check point, I need a notable event to be created in this case as well; is that possible?

Q2: After configuring a #splunk_itsi correlation search as described here, I wasn't able to see notable events created in Episode Review. I have already configured the search in the correlation search and added the associated services, so the final search is as below:

index="itsi_summary" kpi IN ("SH * RAM Static","SH * CPU Adaptive","SH * CPU Static","SH * RAM Adaptive","SH * SWAP") alert_level>1
| `filter_maintenance_services("400f819c-f739-4ffc-a25c-86d48362fef8,917c4030-a422-4645-851e-a5b2b5c7f3cd,7fb610b4-15f2-4d21-b035-b4857c9effef,28aa0103-fb41-4382-ab07-c637c16d3d85,bfe94d80-daf5-43b8-8318-dc881fd30128,b3c8562a-d1d6-465a-b0c7-4a28ba7f4612,225e7eb6-2f7c-4f0f-9221-75b1e8471053,a0826af0-2100-44a4-9b51-558bff966bb7,dcb38bc4-e930-4776-92a8-5de0d50cdc5e,721cb2c5-43fa-4419-9dde-a33a467d7770,328b9170-18d3-4b50-9968-01b1e087f955")`

When I run the search it returns events, so I am not expecting anything wrong in the search query itself. What am I missing in order to get the notable events to show up in the Episode Review tab?

Appreciate your help.
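
For Q1, if the built-in KPI alerting only fires on a severity change, one workaround is a separate correlation search over itsi_summary that triggers whenever a KPI is critical at the time the search runs, regardless of the previous state. A sketch, assuming critical corresponds to alert_level=6 in the summary index (worth verifying in your environment):

index=itsi_summary alert_level=6 kpi IN ("SH * RAM Static","SH * CPU Adaptive","SH * CPU Static","SH * RAM Adaptive","SH * SWAP")
| stats latest(_time) as last_critical by kpi
| convert ctime(last_critical)
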