All Posts

Hi @whitecat001, you could try something like this:

index=your_index | stats latest(_time) AS _time BY Account_name

If you don't want to keep the _time field and prefer to rename it, remember that _time is in epoch time and is automatically displayed in human-readable form; if you rename it, you also have to convert it to a human-readable format yourself:

index=your_index | stats latest(_time) AS latest BY Account_name | eval latest=strftime(latest, "%Y-%m-%d %H:%M:%S")

Ciao. Giuseppe
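For reference, the epoch-to-display conversion that strftime performs in the eval above can be sketched in plain Python (illustrative only, not SPL; the format string is the same one used in the search, and the function name is hypothetical):

```python
from datetime import datetime, timezone

def to_human_readable(epoch_seconds):
    """Convert an epoch timestamp (like Splunk's _time) to a display string."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(to_human_readable(0))  # 1970-01-01 00:00:00
```

Splunk renders _time the same way automatically, which is why the conversion only becomes your job once you rename the field.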
Hi @AtherAD, the connection_host parameter defines how the host value is determined (ip or dns); you cannot use it to assign a host. In addition, you cannot assign multiple hostnames to an input, only one at a time (if needed, using host, not connection_host). You could try the connection_host parameter in your input as described at https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Inputsconf#UDP_.28User_Datagram_Protocol_network_input.29 :

connection_host = [ip|dns|none]
* "ip" sets the host to the IP address of the system sending the data.
* "dns" sets the host to the reverse DNS entry for IP address of the system that sends the data. For this to work correctly, set the forward DNS lookup to match the reverse DNS lookup in your DNS configuration.
* "none" leaves the host as specified in inputs.conf, typically the Splunk system hostname.
* If the input is configured with a 'sourcetype' that has a transform that overrides the 'host' field e.g. 'sourcetype=syslog', that takes precedence over the host specified here.
* Default: ip

In your case:

[udp://514]
sourcetype = firewall_logs
connection_host = dns
disabled = 0
acceptFrom = 192.168.1.*, 192.168.1.*

Ciao. Giuseppe
Hi. QUESTION: is there a method/configuration to fully align a UF with the Deployment Server? Let me explain: DS ServerX has 3 addons configured: addon#1 + addon#2 + addon#3. The UF on ServerX receives addon#1 + addon#2 + addon#3 perfectly. Now, a user logs in as root on ServerX and creates their own custom addon inside the UF, addon#4. Now ServerX has addon#1 + addon#2 + addon#3 (DS) + addon#4 (custom, created by the user). Is there a way to tell the DS: maintain ONLY addon#1 + addon#2 + addon#3 and DELETE ALL OTHER CUSTOM ADDONS (addon#4 in this example)? Thanks.
Hello, I have created a new role, but I noticed that the users to whom I have assigned that role get an "error occurred while rendering the page template" when they click the Fields option under Knowledge. I looked at the capabilities but can't seem to find the right one that provides access to Fields.
Normally I would not propose to ignore built-in structured data. But in this case, you can probably take a shortcut if you are not interested in the data fields inside that JSON blob at all.

index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" (("TargetID":"abc" "Sender":"SenderID":"abc") OR ("status": "SUCCESS"))
| rex "CORRELATION ID :: (?<correlation_id>\S+)"
| eval success_id = if(searchmatch("COMPLETED"), correlation_id, null())
| eventstats values(success_id) as success_id by correlation_id
| where correlation_id == success_id

Here, I observe that status SUCCESS is a subset of COMPLETED. If that's not the case, you can also use searchmatch("\"status\": \"SUCCESS\""). But if you want to utilize the data fields inside the JSON, it could be better to use MessageIdentifier instead, depending on the ratio between success and failure.
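The eventstats/where pair above amounts to keeping only the events whose correlation ID also appears in at least one COMPLETED event. A minimal Python sketch of that set logic (illustrative only, not SPL; the event dicts and function name are hypothetical):

```python
# Keep only events whose correlation ID also shows up in a COMPLETED event,
# mirroring the eventstats values() + where filter in the SPL above.
def keep_completed_correlations(events):
    completed_ids = {e["correlation_id"] for e in events if "COMPLETED" in e["raw"]}
    return [e for e in events if e["correlation_id"] in completed_ids]

sample = [
    {"correlation_id": "a", "raw": "PROCESS :: STARTED"},
    {"correlation_id": "a", "raw": "PROCESS :: COMPLETED"},
    {"correlation_id": "b", "raw": "PROCESS :: STARTED"},
]
print([e["correlation_id"] for e in keep_completed_correlations(sample)])  # ['a', 'a']
```

Both events for correlation ID "a" survive (its flow completed), while the lone "b" event is dropped.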
Your original idea of lookup should work.  Assuming your loadjob gives you a field named account_number, and that your lookup has a column account_number, you can do this   ``` search above gives account_number and other fields ``` | lookup mylookup account_number output account_number as match_account | where account_number == match_account   Is this something you are looking for?
In what way didn't it work? I am not using JavaScript, and init blocks work for me!
Hi @danspav, I added the event handlers and updated $row.URL.value$ in the link-to-custom-URL as you suggested, but the URL still isn't rendering as a hyperlink. Here is the source code, and I've shared a screenshot of the table below. Thank you.

"visualizations": {
    "viz_qFxEKJ3l": {
        "type": "splunk.table",
        "options": {
            "count": 5000,
            "dataOverlayMode": "none",
            "drilldown": "none",
            "backgroundColor": "#FAF9F6",
            "tableFormat": {
                "rowBackgroundColors": "> table | seriesByIndex(0) | pick(tableAltRowBackgroundColorsByBackgroundColor)",
                "headerBackgroundColor": "> backgroundColor | setColorChannel(tableHeaderBackgroundColorConfig)",
                "rowColors": "> rowBackgroundColors | maxContrast(tableRowColorMaxContrast)",
                "headerColor": "> headerBackgroundColor | maxContrast(tableRowColorMaxContrast)"
            }
        },
        "eventHandlers": [
            {
                "type": "drilldown.customUrl",
                "options": {
                    "url": "$row.URL.value|n$",
                    "newTab": true
                }
            }
        ],
@isoutamo  It has been a crazy week. And now I'm going on vacation. We'll take this up again in June. Thanks for all your help and God bless, Genesius
Your table command contains two fields, one of which is not a number.   Single-value visualization really wants you to have only a single value.  Otherwise you are just confusing the visualizer.
Hi Team, Good day! I need to build a query that returns only the success payloads related to a particular service name, where that service name is used by different applications (such as EDS and CDS). We need to pull the data from the request payload through to the response payload based on the correlation ID, which is present in the request payload; each event contains a unique correlation ID. We are using the query below to pull the data for the request payload:

index="os" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "TargetID":"abc" "Sender":"SenderID":"abc"

Using the above query, we get the raw data below:

INFO 2024-05-23 06:05:30,275 [[OS].uber.11789: [services-workorders-procapi].implementation:abc-field-flow.CPU_LITE @7d275f1b] [event: 2-753d5970-18ca-11ef-8980-0672a96fbe16] com.wing.esb: PROCESS :: implementation:abc-field-flow :: STARTED :-: CORRELATION ID :: 2-753d5970-18ca-11ef-8980-0672a96fbe16 :-: REQUEST PAYLOAD :: {"Header":{"Target":{"TargetID":"abc"},"Sender":{"SenderID":"abc"}},"DataArea":{"workOrder":"42141","unitNumber":"145","timestamp":"05/23/2024 00:53:57","nbSearches":"0","modelSeries":"123","manufacturer":"FLY","id":"00903855","faultCode":"6766,1117,3497,3498,3867,6255,Blank","faliurePoint":"120074","faliureMeasure":"MI","eventType":"DBR","event":[{"verificationStatus":"Y","timestamp":"05/23/2024 01:32:30","solutionSeq":"1","solutionId":"S00000563","searchNumber":"0","searchCompleted":"True","repairStatus":"N","informationType":"","componentID":""},{"verificationStatus":"Y","timestamp":"05/23/2024 01:32:30","solutionSeq":"2","solutionId":"S00000443","searchNumber":"0","searchCompleted":"True","repairStatus":"N","informationType":"","componentID":""},{"verificationStatus":"Y","timestamp":"05/23/2024 
02:03:25","solutionSeq":"3","solutionId":"S00000933","searchNumber":"0","searchCompleted":"True","repairStatus":"Y","informationType":"","componentID":""}],"esn":"12345678","dsStatus":"Open","dsID":"00903855","dsClosureType":null,"customerName":"Tar Wars","createDate":"05/23/2024 00:53:49","application":"130","accessSRTID":""}}

And we are using the query below for the response payload:

index="OS" host="abcd*" source="/opt/os/*/logs/*" "implementation:abc-field-flow" "status": "SUCCESS"

Using the above query, we get the raw data below:

5/23/24 11:35:33.618 AM INFO 2024-05-23 06:05:33,618 [[OS].uber.11800: [services-workorders-procapi].implementation:abc-field-flow.CPU_INTENSIVE @4366240b] [event: 2-753d5970-18ca-11ef-8980-0672a96fbe16] com.wing.esb: PROCESS :: implementation::mainFlow :: COMPLETED :-: CORRELATION ID :: 2-753d5970-18ca-11ef-8980-0672a96fbe16 :-: RESPONSE PAYLOAD :: { "MessageIdentifier": "2-753d5970-18ca-11ef-8980-0672a96fbe16", "ReturnCode": 0, "ReturnCodeDescription": "", "status": "SUCCESS", "Message": "Message Received" }

In the raw data from the above two queries, the correlation ID in the request payload should match the correlation ID in the response payload. Based on that, I want a search query that pulls only the data from the request payload through to the response payload, matched on the correlation ID. How do I build one query from these two searches? I want only the response payload data from the two queries. Thanks in advance for your help! Regards, Vamshi Krishna M.
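The "CORRELATION ID :: <id>" token that ties the two payloads together can be pulled out with a regular expression; a minimal Python sketch of that extraction (illustrative only, not SPL; the raw string below is abbreviated from the response log in the question):

```python
import re

# Same pattern idea as a rex extraction: capture the non-whitespace token
# that follows "CORRELATION ID :: " in the raw log line.
CORR_RE = re.compile(r"CORRELATION ID :: (\S+)")

raw = ("com.wing.esb: PROCESS :: implementation::mainFlow :: COMPLETED :-: "
       "CORRELATION ID :: 2-753d5970-18ca-11ef-8980-0672a96fbe16 :-: RESPONSE PAYLOAD")
match = CORR_RE.search(raw)
print(match.group(1))  # 2-753d5970-18ca-11ef-8980-0672a96fbe16
```

Once both request and response events carry this extracted ID, they can be grouped or filtered on it.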
Hi @tej57, thank you for sharing the code. It's working, but when we run this query we get field values of "Null", e.g.:
Service.app.116
Service.ast.24
Service.srt.22
Null
Is there a chance to rename this Null as non-servicecode?
A loadjob is the result of a previously completed search job from one of my reports. I am trying to filter after ingestion. I have all the data there; I just need some account numbers, and I don't want to break the data into multiple files to get everything needed, hence why I asked. Your way would work, but I have a 50K join limit, so I would not get all the results. I need all 104K to pass through this subsearch.
I've trained a Density Function model on my data but ONLY want it to output outliers that exceed the upper bound, not ones below the lower bound. How would I do this? My search:

index=my_index | bin _time span=1d | stats sum(numerical_feature) as daily_sum by department, _time | apply my_model

Currently it is showing all outliers.
Apps under the search head in /opt/splunk/etc/apps/ are not replicating to the search peers' /opt/splunk/var/run/searchpeers/. Here is my setup: I have a standalone search head with indexers as its search peers. I have deployed apps to the search head, and they are not replicating to the search peers.
Can you explain what a "loadjob" is? Normally, if the data is already ingested and you have this lookup file, all you need is a subsearch:

index=myindex sourcetype=mysourcetype [| inputlookup mylookup | fields account]

If you are trying to filter before ingestion, Splunk cannot really do anything.
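Conceptually, that subsearch builds the set of account values from the lookup and keeps only matching events. A minimal Python sketch of the same set-membership filter (illustrative only, not SPL; the record and field names are hypothetical):

```python
# The "subsearch" step: collect allowed accounts from the lookup rows,
# then the "outer search" step: keep only records whose account matches.
def filter_by_lookup(records, lookup_rows):
    allowed = {row["account"] for row in lookup_rows}
    return [r for r in records if r.get("account") in allowed]

lookup_rows = [{"account": "1001"}, {"account": "1003"}]
records = [{"account": "1001", "amount": 5},
           {"account": "1002", "amount": 7},
           {"account": "1003", "amount": 9}]
print([r["account"] for r in filter_by_lookup(records, lookup_rows)])  # ['1001', '1003']
```

This is also why subsearch row limits matter: the allowed set is only as complete as the rows the inner search returns.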
Use single quotes to protect field names when there are undesirable side effects from flattened JSON paths. (The search command cannot finesse this, unfortunately.)

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| where status=500 AND 'context.duration' == 428.70000000006985 AND 'context.env.cookiesSize' == 7670

But note: please use raw text format when sharing structured data. Those spath commands are not necessary.
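To see why the single quotes matter: after Splunk flattens the JSON, a name like context.duration is one field whose name contains dots. In plain Python terms, resolving such a dotted path against the nested JSON looks like this (an illustrative sketch; the event mirrors the fields used in the search above):

```python
# Walk a nested dict one key at a time, following a dotted path such as
# "context.env.cookiesSize" down to its value.
def get_dotted(event, path):
    value = event
    for key in path.split("."):
        value = value[key]
    return value

event = {"status": 500,
         "context": {"duration": 428.70000000006985,
                     "env": {"cookiesSize": 7670}}}
print(get_dotted(event, "context.env.cookiesSize"))  # 7670
```

In SPL there is no such traversal at search time: the flattened name is opaque, so quoting it as 'context.duration' is what keeps where from parsing the dots as something else.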
@gcusello, thank you.
Please help with that.
I'm not sure if anyone has found the exact problem in your situation, but it looks like you may be missing the attribute certBasedUserAuthPivOidList. I do see errors for OID not found in the client cert. The default value is Microsoft Universal Principal Name, but you may need to change it. Or try changing certBasedUserAuthMethod from PIV to EDIPI. Hope this helps.