All Posts


One of my dashboard panels is not showing any results, but when I run the same search manually it returns results. Out of 8 panels, only one has this issue. I am not using base searches.
Yesterday I ingested a new server into my Splunk. In /opt/splunkforwarder/etc/system/local/inputs.conf I used settings like this:

[monitor:///var/log/]
disabled = false
index = <NewIndex>

[monitor:///home/*/.bash_history]
disabled = false
index = <NewIndex>
sourcetype = bash_history

I ingested 6 Ubuntu servers. In the first 4 hours I got a lot of data, about 1 GB (I was shocked because it was only 4 hours), but after 2 days the total was only 4.88 GB. My understanding is that in the first 4 hours it read all the old data from .bash_history and /var/log (maybe), because when I check the indexer it says Earliest Event = 15 years ago. My question is: is this normal, or do I need to change my inputs.conf?

~Danke
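For reference, monitor inputs read the existing contents of matched files by default, so a burst of historical data on first ingest is expected. If the goal is to skip older files, inputs.conf supports an ignoreOlderThan setting on monitor stanzas; a minimal sketch (the <NewIndex> placeholder and the 7-day cutoff are illustrative assumptions, not a recommendation):

# Skip files whose modification time is older than 7 days.
# Note: once a file trips this cutoff it is permanently ignored,
# even if it is modified again later.
[monitor:///var/log/]
disabled = false
index = <NewIndex>
ignoreOlderThan = 7d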
Hi there,

I ingested a new server (Ubuntu with a UF) into a new index. Let's say my index is index=ABC.

I want to connect it to a datamodel. Unfortunately, I'm not the first one who created it, and when I check it I get the warning "This object has no explicit index constraint. Consider adding one for better performance." When I check the macro `cim_Endpoint_indexes`, it only shows (). When I try to add my new index to that macro, I get a 500 server error.

According to this question: https://community.splunk.com/t5/Knowledge-Management/Adding-index-to-accelerated-CIM-datamodel/m-p/586847#M8722 there are 2 solutions: if you don't rebuild the datamodel, Splunk will start to add logs from that index when you save the macro, and old events aren't added to the datamodel, only the new ones; if you rebuild the datamodel, Splunk will add to the datamodel all the events in all indexes contained in the macro up to the retention period (e.g. Network Traffic 1 month, Authentication 1 year, and so on).

Since I can't add the index through the macro, I created new eventtypes and tags for my new index:

Eventtype                        Tags
eventtype=ABC_endpoint_event     tag=endpoint, tag=asset, tag=network
eventtype=ABC_process_event      tag=process, tag=endpoint
eventtype=ABC_network_event      tag=network, tag=communication
eventtype=ABC_security_event     tag=security, tag=endpoint

One of the base searches in the Endpoint datamodel uses tag=process:

(`cim_Endpoint_indexes`) tag=process tag=report | eval process_integrity_level=lower(process_integrity_level)

That query calls tag=process, but when I run it, it doesn't show my new index. Can anyone help me solve this issue?

~Danke
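For context, eventtype-to-tag mappings live in eventtypes.conf and tags.conf, and note that the quoted base search requires both tag=process and tag=report on the same event. A minimal sketch of what the knowledge objects might look like (the ABC_process_event stanza name and index=ABC are taken from the post; the sourcetype constraint is an illustrative assumption):

# eventtypes.conf -- define which events belong to the eventtype
[ABC_process_event]
search = index=ABC sourcetype=bash_history

# tags.conf -- attach tags to that eventtype; the Endpoint base search
# above matches only events carrying both of these tags
[eventtype=ABC_process_event]
process = enabled
report = enabled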
First, the regular expression in the rex command must be enclosed in quotation marks. Second, you're being caught by rex's escape trap: embedded quotation marks must be escaped, but the multiple levels of parsing in SPL call for 3 escape characters.

| rex ", \\\\\"appStatus\\\\\":\\\\\"(?<status>\w+\s\w+)\\\\\""
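A self-contained way to verify the escaping, assuming a minimal event containing the literal \" sequences from the sample (the makeresults setup and shortened _raw are illustrative):

| makeresults
``` build a raw event containing literal backslash-quote sequences, as in the JSON sample ```
| eval _raw="{\\\"statusCode\\\":400, \\\"appStatus\\\":\\\"Schema Validation\\\"}"
``` after SPL string parsing, the pattern reaches the regex engine as \\" ```
| rex ", \\\\\"appStatus\\\\\":\\\\\"(?<status>\w+\s\w+)\\\\\""
| table status
``` status="Schema Validation" ```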
Hi @smanojkumar,

This is a 16-bit adaptation of Gaudet's algorithm from Hacker's Delight, Second Edition (Warren, 2013):

| makeresults
| eval HEX_Code="0002"
``` convert to number ```
| eval x=tonumber(HEX_Code, 16)
``` swap bytes ```
| eval x=bit_shift_right(x, 8)+bit_and(bit_shift_left(x, 8), 65280)
``` calculate number of trailing zeros (ntz) ```
| eval y=bit_and(x, 65535-x+1)
| eval bz=if(y>0, 0, 1), b3=if(bit_and(y, 255)>0, 0, 8), b2=if(bit_and(y, 3855)>0, 0, 4), b1=if(bit_and(y, 13107)>0, 0, 2), b0=if(bit_and(y, 21845)>0, 0, 1)
| eval ntz=bz+b3+b2+b1+b0
``` ntz=9 ```
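As a quick sanity check of the same pipeline (the alternate HEX_Code value is an illustrative assumption): byte-swapping 0x0100 gives 0x0001, which has no trailing zeros.

| makeresults
| eval HEX_Code="0100"
| eval x=tonumber(HEX_Code, 16)
``` byte swap turns 0x0100 into 0x0001 ```
| eval x=bit_shift_right(x, 8)+bit_and(bit_shift_left(x, 8), 65280)
``` y isolates the lowest set bit; the masks then sum its position ```
| eval y=bit_and(x, 65535-x+1)
| eval bz=if(y>0, 0, 1), b3=if(bit_and(y, 255)>0, 0, 8), b2=if(bit_and(y, 3855)>0, 0, 4), b1=if(bit_and(y, 13107)>0, 0, 2), b0=if(bit_and(y, 21845)>0, 0, 1)
| eval ntz=bz+b3+b2+b1+b0
``` ntz=0 ```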
With the following search I'm getting a "Missing Closing Parenthesis" error in Splunk. I tried the same rex in regex101 and it worked.

index=digitalguardian "appStatus"
| rex ,\\"appStatus\\":\\"(?<status>\w+\s\w+)\\"

Sample event:

2024-02-21 {\"callCenterrecontactevent\":{\"customer\":{\"id\":\"6ghty678h\", \"idtypecd\":\"connect_id\"}, \"languagecd\":\"eng\",\"vhannelInstance\":: {\"status\":{\"serverStatusCode\":\"400\",\"severity\":\"Error\",\"additionalStatus\":[{\"statusCode\":400, \"appStatus\":\"Schema Validation\",\"serverity\":\"Error\"
Hi @Jakfarh,

How did you identify the corruption? The rebuild command regenerates tsidx and metadata files from a valid rawdata directory. By default, metrics indexes have metric.stubOutRawdataJournal = true, and the rawdata journal is truncated when the bucket rolls from hot to warm. The documentation stresses this point:

Caution: Because setting this attribute to "true" eliminates the data in the rawdata files, those files can no longer be used in bucket repair operations.

After this occurs, the metrics index bucket consists of only the tsidx and metadata files, the loss of which should be mitigated by an appropriate clustering configuration (which disables metric.stubOutRawdataJournal) or a backup solution. If the tsidx or metadata files are corrupt, you'll need to either address the corruption at the file system or disk level or restore a copy from a backup.
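For reference, a minimal sketch of a rebuild invocation on a standalone indexer (the bucket path and index name are hypothetical):

# regenerates the tsidx and metadata files from the bucket's rawdata journal;
# this cannot work once stubOutRawdataJournal has truncated the journal
splunk rebuild /opt/splunk/var/lib/splunk/my_metrics_index/db/db_1696118400_1696032000_42 my_metrics_index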
Hi @beano501,

From the ES documentation at https://docs.splunk.com/Documentation/ES/7.3.2/Admin/Uploadthreatfile:

Parsing STIX documents of version 2.0 and version 2.1 parses STIX observable objects such as type: "observed-data" from the threat intelligence document as outlined in the collections.conf configuration file. The STIX pattern syntax used in STIX "indicator" objects and elsewhere is not currently supported.

It's implied the parser expects observed-data objects and then reads observable-container objects from the child objects property. It's explicitly stated that pattern syntax is not supported. This is confirmed in $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/bin/parsers/stix2_parser.py (not shown), where we can see the parser expects the deprecated objects property inside an observed-data object in both STIX 2.0 and STIX 2.1 documents. We probably want something like this:

{
    "type": "bundle",
    "id": "bundle--50ea61e5-7cce-4a72-a876-bfe45793d235",
    "spec_version": "2.0",
    "objects": [
        {
            "type": "threat-actor",
            "id": "threat-actor--840bb5cd-af46-4c45-9489-43f7bfe612b8",
            "created": "2023-09-08T00:02:39.000Z",
            "modified": "2023-09-08T00:02:39.000Z",
            "name": "Bad Guys",
            "description": "No, really. They are bad guys.",
            "labels": [ "uncategorized" ]
        },
        {
            "type": "observed-data",
            "id": "observed-data--110847c9-a492-4491-883f-0cea407bb6b1",
            "created": "2023-09-08T00:02:39.000Z",
            "modified": "2023-09-08T00:02:39.000Z",
            "first_observed": "2023-09-08T00:02:39.000Z",
            "last_observed": "2023-09-08T00:02:39.000Z",
            "number_observed": 1,
            "objects": {
                "0": {
                    "type": "ipv4-addr",
                    "value": "101.38.159.17"
                }
            }
        }
    ]
}

For more information about which properties are mapped from the nested object to the ip_intel collection, see the cited collections.conf file at $SPLUNK_HOME/etc/apps/DA-ESS-ThreatIntelligence/default/collections.conf:

# STIX2 Mappings to ip_intel
# * <collection_field> : <observable-type>.<observable-object-field> - <observable-reference-type>.<reference-object-field>
#
# * ip                : ipv4-addr.value
# *                   : ipv6-addr.value
# * domain            : domain-name.value
# * address           : None
# * city              : None
# * country           : None
# * postal_code       : None
# * state_prov        : None
# * oranization_name  : None
# * organization_id   : None
# * registration_time : None
# * description       : None
# * threat_key        : <id of root element>|<simple filename>
# * time              : source_processed_time from threat_group_intel
# * weight            : Parsed from the stanza if downloaded, or required input from user when uploaded
# * updated           : None
# * disabled          : false

The full list of supported STIX 2.x observed-data objects is:

email-message => email_intel
ipv4-addr => ip_intel
ipv6-addr => ip_intel
domain-name => ip_intel
file => file_intel
network-traffic (with http-request-ext extension) => http_intel
process => process_intel
process (with windows-service-ext extension) => service_intel
windows-registry-key => registry_intel
user-account => user_intel
x509-certificate => certificate_intel
@VatsalJagani If I am trying to implement this within a Splunk app, would I be able to use https://splunkui.splunk.com/Packages/react-ui/Overview, or do I need to use https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/CustomVizTutorial?
Nice job. I'm still not convinced it's how it was supposed to work.
@VijaySrrie - It seems you have a proxy issue or a proxy SSL issue. The app's GitHub page's Configuration section describes the solution to this proxy or SSL problem: https://github.com/CrossRealms/Splunk-App-Auto-Update-MaxMind-Database

I hope this helps!!! Kindly upvote if it does!!!
@tomapatan - I'm not 100% sure what you are trying to do, but you probably don't need a JS file; a Simple XML dashboard can do this without JS code. This is just another example to illustrate the usage: it shows a token set from a dropdown filter, but a token set from a table or chart drilldown (on-click) would work in a similar way. Reference doc: https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/PanelreferenceforSimplifiedXML#drilldown

<form>
  <label>dropdown</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="tkn_number">
      <label>field1</label>
      <default>3</default>
      <fieldForLabel>count</fieldForLabel>
      <fieldForValue>count</fieldForValue>
      <search>
        <query>| makeresults count=10 | streamstats count</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <change>
        <condition match="'value'==&quot;3&quot;">
          <set token="tkn_show">true</set>
        </condition>
        <condition>
          <unset token="tkn_show"></unset>
        </condition>
      </change>
    </input>
  </fieldset>
  <row depends="$tkn_show$">
    <panel>
      <table>
        <search>
          <query>index="_internal" |stats count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

I hope this helps!!! Kindly upvote if it does!!
@dionrivera - I have no personal experience with this app, but I see it just uses an HEC data input, and that should be allowed on the new Cloud Victoria Experience stack. So please reach out to Cloud support for app installation support, and check with your support representative to see if they can fetch you more details about support on Cloud.

I hope this helps!!!
This is a bit theoretical; can you please give a concrete example of what your dashboard would look like?
@wmw - Please try removing the extra slash (path separator) at the front:

<form version="1.1" script="common_ui_util:js/close_div.js">

I hope this helps!!! If it does, kindly upvote and accept the answer!!
I have a first query which creates a list of application names that are then displayed in multiple single-value fields. These single-value fields are in the first column of a larger table.

| where count=1
| fields app

In the rest of the columns I need to put a single-value field with the compliance rate of that application across multiple metrics. What I'm looking to do is set a variable per row on data load that would allow me to ensure I pull the right compliance number for the application name. My original idea was to hard-code the compliance visualization to search for a specific application name; however, if the list of applications changes, the metric will no longer match the name. So how does one set a variable on search load to be used by other visualizations?
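For context, Simple XML handles this kind of per-row binding with drilldown tokens rather than variables; a minimal sketch (the tkn_app token name and both searches are illustrative assumptions):

<form>
  <row>
    <panel>
      <table>
        <search>
          <query>| stats count by app | where count=1 | fields app</query>
        </search>
        <drilldown>
          <!-- clicking a row stores that row's app name in a token -->
          <set token="tkn_app">$row.app$</set>
        </drilldown>
      </table>
    </panel>
    <panel depends="$tkn_app$">
      <single>
        <!-- downstream panels read the token to pull the matching compliance number -->
        <search>
          <query>index=compliance app="$tkn_app$" | stats avg(compliance_rate)</query>
        </search>
      </single>
    </panel>
  </row>
</form>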
@icecreamkid98 - Yes, there might be a newer approach that you should choose.

If you just want to build a custom visualization inside an XML dashboard:
https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/CustomVizTutorial
https://dev.splunk.com/enterprise/docs/developapps/visualizedata/displaydataview/splunkplatformcustom/

But if you want a full dashboard with everything controlled by JS, then build it in the new React framework:
https://splunkui.splunk.com/Packages/react-ui/Overview

I hope this helps!!
In Splunk, the error message "The search you requested could not be found" typically indicates an issue related to accessing or locating a saved search or search job. Here are some common reasons for this error and possible solutions:

1. Expired Search Jobs
Search jobs in Splunk have a Time to Live (TTL) value, after which they expire and are deleted. If the job you are trying to access has already expired, Splunk will display this error.
Solution: Try rerunning the search or adjusting the TTL of the search job for future cases. When you open the search, you'll find a section called "Job Settings." Inside that, there's an option labeled "lifetime," which allows you to set the duration to either 10 minutes or 7 days. Refer to this link for more info: https://docs.splunk.com/Documentation/Splunk/9.2.1/Search/Extendjoblifetimes

2. Job ID Not Found
If you're trying to view a specific search job using its ID (e.g., via the URL or search job history), the job might not exist anymore, or the ID could be incorrect.
Solution: Double-check the job ID or re-run the search to generate a new job ID.

3. Permissions or Access Issues
The saved search might have been moved, renamed, or deleted, or you may not have the necessary permissions to view it.
Solution: Verify that you have the correct permissions to access the saved search and ensure that it still exists.

4. Corrupted Search Job
In rare cases, search jobs might become corrupted or incomplete, causing Splunk to fail when trying to load the search results.
Solution: If possible, rerun the search to create a fresh job.

5. App Context Change
If a saved search was created in a different app context (e.g., in one Splunk app and you're trying to access it from another), Splunk might not be able to find the search.
Solution: Switch to the app where the search was created, or ensure the search is shared across apps.

6. Search Scheduling Conflicts
If the saved search is scheduled and there was an issue during one of its scheduled runs, Splunk might show this error if it can't retrieve the job.
Solution: Review the schedule settings or try manually running the search to confirm it works.

If you find this solution helpful, please consider accepting it and awarding karma points!!
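As a sketch of the TTL adjustment for a scheduled search, assuming a savedsearches.conf stanza named My Report (the stanza name and value are illustrative):

# savedsearches.conf -- keep this search's job artifacts for 24 hours
# (dispatch.ttl takes seconds, or a multiple of the schedule period with a trailing p)
[My Report]
dispatch.ttl = 86400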
The search you requested could not be found. The search has probably expired or been deleted. Clicking "Rerun search" will run a new search based on the expired search's search string in the expired search's original time period. Alternatively, you can return back to Splunk.
Thank you all very much for the help. The issue was related to the solution @ITWhisperer gave: in my search I was referencing the lookup table file, when it should have been the lookup definition that I created.

Thanks again for all the help.
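For anyone hitting the same thing, the difference in SPL looks like this (my_apps.csv and my_apps_def are hypothetical names for a lookup file and its lookup definition):

``` referencing the raw CSV file works only for simple cases ```
| lookup my_apps.csv app OUTPUT owner
``` referencing the lookup definition picks up its configured settings (case sensitivity, match rules, etc.) ```
| lookup my_apps_def app OUTPUT owner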