All Posts

(?i)^AZ-(?<temp_hostname4>([-A-Z0-9]+))(?:[-A-Z0-9]+?)(?=\1).*-VMSS$

According to regex101, this matches your 3 events in slightly under 13k steps (about 4k steps per event): https://regex101.com/r/8h4zwD/1
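For reference, a minimal sketch of how that pattern might be dropped into the original search (the field name source_hostname is taken from the question; the surrounding search is illustrative only):

| rex field=source_hostname "(?i)^AZ-(?<temp_hostname4>([-A-Z0-9]+))(?:[-A-Z0-9]+?)(?=\1).*-VMSS$"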
Were you able to find a solution for this?
From what I've read in the ScienceLogic docs ("Logging in the Integration Service Platform"): If you use your own existing logging server, such as Syslog, Splunk, or Logstash, the Integration Service can route its logs to a customer-specified location. To do so, just attach your service, such as logspout, to the Microservice stack and configure your service to route all logs to the server of your choice.

They also seem to have an API for integrating with other systems that you may want to check out: https://docs.sciencelogic.com/latest/Content/Web_Content_Dev_and_Integration/ScienceLogic_API/api_title_page.htm?Highlight=API
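Purely as an illustrative sketch (not from the ScienceLogic docs): logspout is usually attached as a sidecar container and pointed at a syslog endpoint, which Splunk could then receive on a TCP input. The hostname and port below are placeholders:

docker run -d --name=logspout \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tcp://splunk.example.com:1514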
Hello Splunker, hope you had a great day! As per the picture below:

[screenshot not included]

Q1: I need to understand the exact process of creating the TSIDX file and its contents, and how it actually speeds up the search.
Q2: Why is the size of the tsidx file bigger than the raw data itself (35% vs 15%)?
Q3: What is the difference between a tsidx file and a data model summary?
I am expecting a long answer with lots of details; I actually like details! Thanks in advance!
Hi @BRFZ, sorry but I don't understand: in the Splunk_TA_windows inputs.conf there are some stanzas for Active Directory:

###### WinEventLog Inputs for Active Directory ######

## Application and Services Logs - DFS Replication
[WinEventLog://DFS Replication]
disabled = 1
renderXml=true

## Application and Services Logs - Directory Service
[WinEventLog://Directory Service]
disabled = 1
renderXml=true

## Application and Services Logs - File Replication Service
[WinEventLog://File Replication Service]
disabled = 1
renderXml=true

## Application and Services Logs - Key Management Service
[WinEventLog://Key Management Service]
disabled = 1
renderXml=true

###### WinEventLog Inputs for DNS ######
[WinEventLog://DNS Server]
disabled=1
renderXml=true

###### DHCP ######
[monitor://$WINDIR\System32\DHCP]
disabled = 1
whitelist = DhcpSrvLog*
crcSalt = <SOURCE>
sourcetype = DhcpSrvLog

You could enable them (disabled = 0) and add a row with the index in which to store the data in each stanza. What's the issue? Anyway, think about what I said in my previous answer: why use a different index? Ciao. Giuseppe
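As a concrete sketch, an enabled stanza with an explicit index might look like this (the index name wineventlog is a placeholder, not from the TA):

[WinEventLog://Directory Service]
disabled = 0
renderXml=true
index = wineventlog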
Hi @ITWhisperer, thanks for reaching out. As part of my test, the ITSI app permissions are set to read and write for "Everyone". Also, an app called ITOA Backend with folder name SA-ITOA has the same permissions set.
Based on that output, your UFs cannot read or find those files. Are you absolutely sure that you are checking with the same account that is used to run splunkd? As @PickleRick said, you should check whether there is any issue with SELinux.
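A minimal sketch of the checks involved (the splunk user name and log path below are placeholders):

# Inspect ownership and permissions on the monitored path
ls -ld /var/log/myapp /var/log/myapp/*.log
# Try reading a file as the account that runs splunkd
sudo -u splunk head -n 1 /var/log/myapp/app.log
# Check the SELinux mode and look for recent denials
getenforce
ausearch -m avc -ts recent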
What does your timechart currently give you? Daily counts, hourly counts? What does "average" mean in this context? Does previous 3 months include the current month or only complete months prior to the current month? Please provide some sample representative anonymised events and a representation of what your output results would be (as a table not a graph).
source=*.log host=myhostname "provider=microsoft" "status=SENT_TO_AGENT"
| timechart dedup_splitvals=t limit=10 useother=t count AS "Count of Event Object" by provider format=$VAL$:::$AGG$
| fields + _time, "*"

This displays a count of entries in the logs that say "SENT_TO_AGENT". I want to display an average line chart for the previous 3 months, with the current month as an overlay over the previous months.
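A minimal sketch of one possible approach using timewrap, assuming daily counts over a 4-month time range; the span and any averaging step would need adjusting to whatever "average" means here:

source=*.log host=myhostname "provider=microsoft" "status=SENT_TO_AGENT"
| timechart span=1d count AS "Count of Event Object"
| timewrap 1month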
Does your custom user (role) have the correct access to the ITSI app?
Hi Splunkers, I am trying to extract a string within a string, which has been repeated, with the addition of some pre- and post-fixes; only the very start and end of the string are static values ('AZ-' and '-VMSS'). Example data:

AZ-203-dev-app-1-build-agents-203-dev-app-1-build-agents0006GA-1720624093-VMSS
AZ-eun-dev-005-pqu-ado-vmss-eun-dev-005-pqu-ado-vmss005X89-1720625975-VMSS
AZ-DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE-DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE000000-1720637733-VMSS

I have a working rex command to extract the relevant data (temp_hostname4):

| rex field=source_hostname "(?i)^AZ(?<cap1>(-[A-Z0-9]+)+)(?=\1[A-Z0-9]{6})-(?<temp_hostname4>([A-Z0-9]+-?)+)-\d{10}-VMSS$"

which correctly extracts:

203-dev-app-1-build-agents0006GA
eun-dev-005-pqu-ado-vmss005X89
DEV-CROSS-SUBSCRIPTION-PROXY-EUN-BLUE000000

But let's face it, this is horrible! According to regex101 this takes 46K+ steps, which can't be nice for Splunk to apply to c.20K records several times per day. Can anyone suggest optimisations to bring that number down?

For added complication (and for clarity to anyone reading this), it's temp_hostname4 because there are multiple other ways the hostname might have been... manipulated before it gets to Splunk, sometimes with the string repeated, sometimes not, resulting in the following SPL. I could use coalesce rather than case, but that's hardly important right now, and separating the regex statements seemed like the saner thing to do in this instance.

| rex field=source_hostname "(?i)^AZ(?<cap1>(-[A-Z0-9]+)+)(?=\1[A-Z0-9]{6})-(?<temp_hostname4>([A-Z0-9]+-?)+)-\d{10}-VMSS$"
| rex field=source_hostname "(?i)^AZ-(?<temp_hostname3>[^.]+)-\d{10}-VMSS$"
| rex field=source_hostname "(?i)^AZ-(?<temp_hostname2>[^.]+)-\d{10}$"
| rex field=source_hostname "(?i)^(?<temp_hostname1>[^.]+)_\d{10}$"
| eval alias_source_of=case(
    !isnull(temp_hostname4), temp_hostname4,
    !isnull(temp_hostname3), temp_hostname3,
    !isnull(temp_hostname2), temp_hostname2,
    !isnull(temp_hostname1), temp_hostname1,
    1=1, null()
)

Any suggestions for optimisations of the regex would be gratefully appreciated.
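For completeness, the coalesce variant mentioned above would be the one-liner below; it is equivalent here because coalesce takes the first non-null value in the same order as the case branches:

| eval alias_source_of=coalesce(temp_hostname4, temp_hostname3, temp_hostname2, temp_hostname1)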
I am unable to find and add-on or app in Splunkbase for getting ScienceLogic events into Splunk.  Does anybody have a solution for getting ScienceLogic metrics/events into Splunk?
Hello Splunkers, I have a question. I'm trying to configure a custom role in Splunk where I'm assigning capabilities natively. I'm recreating the default capabilities assigned to User in Splunk Enterprise and itoa_user in Splunk ITSI without using the inheritance option (doing this as a test so I can later remove capabilities as I need to). The problem I have is that once I save the role with all 65 matching capabilities selected and log in as the testuser assigned to that role, dashboards that use the "getservice" command in their searches do not work and display the following error:

[subsearch]: command="getservice", [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8089/servicesNS/nobody/SA-ITOA/storage/collections/config/itsi_team

This issue does not happen when I simply select Inherit capabilities for User and itoa_user. Any ideas as to what could be causing this issue? I'm running Splunk version 9.1.1.
I had a similar desire to change the number of fields displayed dependent on a condition. Mine was triggered by a dropdown selection, so I set a token when the dropdown was changed; that token held a list of the fields I wanted to display. At the end of my search I used

| fields $myfields$

and it works perfectly. I don't think it is possible within the search itself, but if the fields could be set based on the results of another search or an input box, it should be possible.
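A minimal Simple XML sketch of that pattern (the field names and base search are made-up examples); the token value carries the space-separated field list:

<input type="dropdown" token="myfields">
  <label>Fields to display</label>
  <choice value="host source sourcetype">Basic</choice>
  <choice value="host source sourcetype user action">Detailed</choice>
  <default>host source sourcetype</default>
</input>
...
<query>index=main | fields $myfields$</query>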
I got it:

| eval start_time=strptime(start_time,"%Y/%m/%d %H:%M")
| eval end_time=strptime(end_time,"%Y/%m/%d %H:%M")
| eval range_start_time = relative_time(start_time, "@h")
| eval range_end_time = relative_time(end_time, "+1h@h")
| eval range_start_time=mvrange(range_start_time, range_end_time, "1h")
| mvexpand range_start_time
| eval range_end_time=range_start_time+3600
| eval end_time=min(range_end_time, end_time)
| eval start_time=max(range_start_time, start_time)
| eval start_time=strftime(start_time,"%Y/%m/%d %H:%M")
| eval end_time=strftime(end_time,"%Y/%m/%d %H:%M")
| eval range_start_time=strftime(range_start_time,"%Y/%m/%d %H:%M")
| eval range_end_time=strftime(range_end_time,"%Y/%m/%d %H:%M")
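To try it out, one might prepend a made-up test row (the timestamps are placeholders) and then run the pipeline above:

| makeresults
| eval start_time="2024/07/10 10:30", end_time="2024/07/10 13:05"

That single row should expand to four hourly rows (10:00, 11:00, 12:00, 13:00), with the first row's start_time clipped to 10:30 and the last row's end_time clipped to 13:05.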
In case anyone else stumbles upon this thread, this solution worked for me.
The number of backslashes in the data doesn't matter, it is the number of backslashes in the regex string I was talking about. Backslashes normally need to be escaped (with a backslash), however, sometimes these backslashes have to be escaped as well, hence the need for 4 backslashes to represent a single backslash. Try something like this (which allows for escaped (backslashed) commas in all the columns):

(?<Col1>.+?(?<!\\\\)),(?<Col2>.+?(?<!\\\\)),(?<Col3>.+?(?<!\\\\)),(?<Col4>.+?(?<!\\\\)),(?<Col5>.+?(?<!\\\\)),(?<Col6>.+?(?<!\\\\)),(?<Col7>.+?(?<!\\\\)),(?<Col8>.+?(?<!\\\\)),(?<Col9>.+?(?<!\\\\)),(?<Col10>.+?(?<!\\\\)),(?<Col11>.+?(?<!\\\\))$

Again, it might be that you only need two backslashes each time instead of four.
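As an illustrative check (reduced to 3 columns, with made-up sample data), the negative lookbehind keeps an escaped comma inside its column; the four-backslash form applies when the regex sits inside a double-quoted rex string:

| makeresults
| eval _raw="alpha,bra\\,vo,charlie"
| rex "(?<Col1>.+?(?<!\\\\)),(?<Col2>.+?(?<!\\\\)),(?<Col3>.+?(?<!\\\\))$"

Expected result: Col1=alpha, Col2=bra\,vo, Col3=charlie.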
| eval FieldName=split(FieldName," ")
Anyone able to successfully run Independent Stream Forwarder on Fedora or Debian? I have inherited a small stand-alone, bare-metal Splunk Enterprise 9.1.2 instance running on Fedora 39. I'm trying to point a Netflow stream at the ISF installed on this same server, but I'm getting blank screens in Distributed Forwarder Manager and Configure Streams in the Splunk Stream app, which is also installed on the same server. Thank you!
I fixed it! It was not the capabilities that were at fault, it was the curl command. The documentation says to use the following to create an index:

curl -k -u editor-user:MyPasword1 https://localhost:8089/servicesNS/admin/myapp/data/indexes -d name=newindex

That REST API call asks to make changes in the admin namespace, but the indexes are in the nobody namespace, so I needed to change it to the following, and then it worked:

curl -k -u editor-user:MyPasword1 https://localhost:8089/servicesNS/nobody/myapp/data/indexes -d name=newindex