All Posts

But I have issues with ".url.com" since it doesn't exactly match the hostname. I have tried replacing the entries with "*.url.com", but the Splunk lookup doesn't match wildcards. That is not correct, as @andrew_nelson points out. The problem is that you are trying to use inputlookup when lookup is the logical solution. Once you define a lookup with WILDCARD(url), however, you do not need to add an additional field. (You may want to use a case-insensitive match, too.) This is how you do it in Splunk Web: here, I name the lookup definition without .csv. This is the search to count matches per url as defined in the lookup:

index=my-proxy [inputlookup all_urls | rename url as hostname ]
| lookup all_urls url as hostname output url as url
| stats count by url

This does effectively the same as Andrew's, except you don't need to add a second column. You also do not need a where command, because the inputlookup subsearch already does that. I understand that your reason for using inputlookup is to print 0 if there is no match.
So you add one more step:

| append [inputlookup all_urls]
| stats values(count) as count by url
| fillnull count

Given the following events in index my-proxy (assuming the field hostname is already extracted at search time and represents the destination in your proxy log):

_time                hostname
1969-12-31 16:00:01  abc.url2.com
1969-12-31 16:00:02  def.url1.com
1969-12-31 16:00:03  ghi.url2.com
1969-12-31 16:00:04  www.url1.com
1969-12-31 16:00:05  site.url2.com
1969-12-31 16:00:06  abc.url1.com
1969-12-31 16:00:07  def.url2.com
1969-12-31 16:00:08  ghi.url1.com
1969-12-31 16:00:09  www.url2.com
1969-12-31 16:00:10  site.url1.com
1969-12-31 16:00:11  abc.url2.com
1969-12-31 16:00:12  def.url1.com
1969-12-31 16:00:13  ghi.url2.com
1969-12-31 16:00:14  www.url1.com
1969-12-31 16:00:15  site.url2.com

the above search should give you

url            count
*.url2.com     8
site.url3.com  0
www.url1.com   2

Here is an emulation for you to play with and compare with real data:

| makeresults count=15
| streamstats count as _time
| eval _domain = json_object(1, "abc", 2, "def", 3, "ghi", 4, "www", 0, "site")
| eval hostname = json_extract(_domain, tostring(_time % 5)) . ".url" . (_time % 2 + 1) . ".com"
``` the above emulates index=my-proxy [inputlookup all_urls | rename url as hostname ] ```
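Assembling the pieces above, the end-to-end search would look roughly like this (a sketch, assuming all_urls is the WILDCARD lookup definition described earlier and hostname is the extracted destination field):

index=my-proxy [inputlookup all_urls | rename url as hostname ]
| lookup all_urls url as hostname output url as url
| stats count by url
| append [inputlookup all_urls]
| stats values(count) as count by url
| fillnull count

The append adds one row per lookup entry with no count, so after the second stats any url that matched no events survives with a null count, which fillnull turns into 0.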
Configured as below. Now the error is resolved, but I am still not getting the Jenkins logs into Splunk; I am only seeing the response below in Splunk.

Configuration:

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
sourcetype = jenkins:build
token =
useACK = 0

Logs in Splunk:

ping from jenkins plugin raw event ping
@Maries NOTE: You can keep the index set to the default (main, in general), or 'jenkins', or whatever you prefer while setting up the token, as the Splunk app for Jenkins is capable of filtering the events and redirecting them to the correct pre-configured indexes (the app ships with four indexes: jenkins, jenkins_statistics, jenkins_console, jenkins_artifact).
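For instance, since the errors earlier in this thread show the plugin sending to jenkins_console and jenkins_statistics, the HEC token stanza would need those indexes in its allow list. A sketch of inputs.conf (index names assume the app's defaults, and each index must already exist on the indexer):

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
sourcetype = jenkins:build
index = jenkins
indexes = jenkins, jenkins_statistics, jenkins_console, jenkins_artifact
token =
useACK = 0

With `indexes` covering all four app indexes, events routed by the plugin to any of them will no longer be rejected with "Incorrect index".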
@Maries Check this  https://plugins.jenkins.io/splunk-devops/  https://medium.com/cloud-native-daily/monitoring-made-easy-enhancing-ci-cd-with-splunk-and-jenkins-integration-576eab0bff9 
@Maries Did you create the index on the indexer?
@ashutoshh The Splunk sales team will respond; if not, you can call directly and explain your requirement. The other option: many companies resell Splunk ITSI licenses. You can check for Splunk Partners or Authorized Resellers in your region. Purchasing or transferring a Splunk ITSI license from an individual or third party who is not an authorized reseller violates Splunk's licensing agreement. Splunk licenses are non-transferable and bound to the original purchasing entity. Unauthorized sharing or selling of licenses can lead to compliance issues, termination of the license, or legal action from Splunk.
Hi, I did this already.
@ashutoshh The same goes, for example, for the Enterprise Security app. Reach out to Splunk Sales to discuss your requirements and get a quote for the ITSI app.
Team, I'm trying to push Jenkins build logs to Splunk.

Installed the Splunk plugin (1.10.1) in my CloudBees Jenkins. Configured HTTP host, port & token; tested the connection and it looks good.

In Splunk, created an HEC input in the file below with the following content.

File name: /opt/app/splunk/etc/apps/splunk_httpinput/local/inputs.conf

[http://jenkins_build_logs]
description = Jenkins build Logs
disabled = 0
index = infra
indexes = infra
sourcetype = jenkins:build
token =
useACK = 0

Getting the errors below in the Splunk logs (/opt/app/splunk/var/log/splunk):

02-08-2025 04:52:07.704 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.102.217, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=381, parsing_err="invalid_index='jenkins_console'"
02-08-2025 04:54:14.617 +0000 ERROR HttpInputDataHandler [17467 HttpDedicatedIoThread-1] - Failed processing http input, token name=jenkins_build_logs, channel=n/a, source_IP=10.212.100.150, reply=7, status_message="Incorrect index", status=400, events_processed=1, http_input_body_size=317, parsing_err="invalid_index='jenkins_statistics'"
Hi, I tried contacting sales through the listed options and submitted an email, but got no response. Do you know someone who has that license? I can pay for it.
@ashutoshh Splunk ITSI is a premium app, so it requires an additional license beyond the standard Splunk Enterprise license. If you purchased it, you will need to make sure that your name and email are listed in the entitlement. Otherwise, whoever is listed on the entitlement can download it for you.
Hi there, I am new to this community, but I want to understand how to purchase Splunk ITSI. I already have both a Splunk Enterprise license (purchased from AWS Marketplace) and the free license. A long time back I used Splunk ITSI for free with an Enterprise license, but now it requires some authorization and says my user is not listed in the authorized list when downloading ITSI. Please help me with this.
Hi @anissabnk, As a quick workaround in a classic dashboard, you can use colorPalette elements with type="expression" to highlight cells if the cell value also includes the status: <dashboard version="1.1" theme="light"> <label>anissabnk_table</label> <row depends="$hidden$"> <html> <style> #table1 th, #table1 td { text-align: center !important } </style> </html> </row> <row> <panel> <table id="table1"> <search> <query>| makeresults format=csv data=" _time,HOSTNAME,PROJECTNAME,JOBNAME,INVOCATIONID,RUNSTARTTIMESTAMP,RUNENDTIMESTAMP,RUNMAJORSTATUS,RUNMINORSTATUS,RUNTYPENAME 2025-01-20 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-19 20:18:25.0,,STA,RUN,Run 2025-01-19 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-18 20:18:25.0,2025-01-18 20:18:29.0,FIN,FWF,Run 2025-01-18 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-17 20:18:25.0,2025-01-17 20:18:29.0,FIN,FOK,Run 2025-01-17 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-16 20:18:25.0,2025-01-16 20:18:29.0,FIN,FWW,Run 2025-01-16 04:38:04.142,AEW1052ETLLD2,AQUAVISTA_UAT,Jx_104_SALES_ORDER_HEADER_FILE,HES,2025-01-15 20:18:25.0,2025-01-15 20:18:29.0,FIN,HUH,Run " | eval _time=strptime(_time, "%Y-%m-%d %H:%M:%S.%Q") | search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*" | eval status=case(RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", "Completed with Warnings", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", "Successful Launch", RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", "Failure", RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", "In Progress", 1=1, "Unknown") | eval tmp=JOBNAME."|".INVOCATIONID | eval date=strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%Y-%m-%d") | eval value=if(status=="Unknown", "Unknown", "start time: ".coalesce(strftime(strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), 
"").urldecode("%0a").if(status=="In Progress", "Running", "end time: ".coalesce(strftime(strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q"), "%H:%M"), ""))).urldecode("%0a").status | xyseries tmp date value | eval tmp=split(tmp, "|"), Job=mvindex(tmp, 0), Country=mvindex(tmp, 1) | fields - tmp | table Job Country *</query> </search> <option name="drilldown">none</option> <option name="wrap">true</option> <format type="color"> <colorPalette type="expression">case(like(value, "%Unknown"), "#D3D3D3", like(value, "%Successful Launch"), "#90EE90", like(value, "%Failure"), "#F0807F", like(value, "%Completed with Warnings"), "#FEEB3C", like(value, "%In Progress"), "#ADD9E6")</colorPalette> </format> </table> </panel> </row> </dashboard> There may be arcane methods for formatting cells without using JavaScript or including the status in the value, but I don't have them readily available.
It is often much easier for volunteers to provide answers (particularly to search/SPL questions) if you post sample events in their raw format so that we can attempt to simulate your situation and design solutions to meet your needs. We do not have the benefit of access to your data so you need to give us something to work with.
I'll keep this question open another day or so. I'm thrilled I managed to solve the issue, but I'll admit the solution isn't exactly as clean and efficient as I'd like. If anyone smarter than me wants to propose a better solution, I'm happy to hear it.
Sorry I wasn't clear enough. There are two shared fields: mac_add and ip_add. However, I need to be able to summarize by the Session_ID field. Because that field isn't shared, I first summarize by mac_add and ip_add in the first stats command. Then, in the second, I summarize by Session_ID. The issue is that the time field becomes a multi-value field holding the timestamps of every summarized event rather than a unique timestamp for each Session_ID. Hmm... maybe I can mvzip a Session_ID and its time field together to keep the pair intact between the stats commands and split them apart further down the pipeline... --------------------- The answer to that question is YES! I can do exactly that, and it fixes the problem. What I did was use mvzip to combine the Session_ID and time into a new field, session_time, after the first stats command. Then, after the second stats summarizing by Session_ID, I split apart the session_time field with mvexpand to get individual events pairing a session with its time. I then used rex to split that pair into two new fields, a session field and a time field. Finally, a dedup to clean out the duplicates, and it was done! This is the command now.
(index=indexA) OR (index=indexB)
| rex field=text "AuditSessionID (?<SessionID>\w+)"
| rex field=pair "session-id=(?<SessionID>\w+)"
| eval time_{index}=strftime(_time,"%F %T")
| eval ip_add=coalesce(IP_Address, assigned_ip), mac_add=coalesce(upper(src_mac), upper(mac))
| eval auth=case(CODE==45040, "True", true(), "False")
| stats values(host_name) as hostname values(networkSwitch) as Switch values(switchPort) as Port values(auth) as Auth values(SessionID) as Session_ID values(time_indexA) as time by mac_add, ip_add
| eval session_time=mvzip(Session_ID, time)
| stats values(session_time) as session_time values(time) as time values(hostname) as hostname values(Switch) as Switch values(Port) as Port values(Auth) as Auth values(ip_add) as IP_Address values(mac_add) as MAC_Address by Session_ID
| mvexpand session_time
| fields - time Session_ID
| rex field=session_time "(?<Session_ID>\w+),(?<Time>.+)"
| fields - session_time
| dedup Session_ID Time
| table Time hostname MAC_Address IP_Address Switch Port Auth Session_ID
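The mvzip/mvexpand/rex pattern above can be illustrated on synthetic data (a sketch using makeresults; the values are hypothetical):

| makeresults
| eval Session_ID=split("S1,S2", ","), time=split("2025-01-01 10:00:00,2025-01-01 11:00:00", ",")
| eval session_time=mvzip(Session_ID, time)
| mvexpand session_time
| rex field=session_time "(?<Session_ID_out>\w+),(?<Time>.+)"
| table session_time Session_ID_out Time

mvzip pairs the two multi-value fields element by element ("S1,2025-01-01 10:00:00" and so on), mvexpand produces one event per pair, and rex splits each pair back into its own fields, so the session/time association is never lost across the stats commands.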
If data from the TA is not being indexed then ITSI cannot find it and display it. Why is the data not indexed?
In order to get data from Splunk, you must first get the data into Splunk. Splunk is a data processing platform, but you need to have something to be processed. How would you get that data? Where from? If the only way to produce such data is running gpresult, you need to run it and store the results in Splunk somehow.
You can scope it to the first column with something like this inside the <panel>:

<html depends="$alwaysHideCSS$">
  <style>
    /* Right-align only the first column of the table */
    #table1 .table th:nth-child(1),
    #table1 .table td:nth-child(1) {
      text-align: right !important;
    }
  </style>
</html>
Thanks, Kiran! I am reading up on this now!