I know how to set values: multiselect.val(value_array); BUT: is there a way to set the labels to a different value (not the actual value)? For example: I want to be able to select the country by its name, but in the search I use the country code. Something like: multiselect.label(label_array); or: multiselect.val(value_array, label_array); I tried an array with label-value pairs, but it did not work.
Hi, After the migration of our McAfee ePO server, I want to change the SQL query to reflect the changes made in the ePO database. But when I click "next" in the Data Lab, step 4 fails. The query is OK (in the Data Lab it returns values), the "rising" parameters seem to be good, so I don't see what's going wrong. Thanks in advance for your help. Below is the query:

SELECT
    [EPOEvents].[ReceivedUTC] as [timestamp],
    [EPOEvents].[AutoID],
    [EPOEvents].[ThreatName] as [signature],
    [EPOEvents].[ThreatType] as [threat_type],
    [EPOEvents].[ThreatEventID] as [signature_id],
    [EPOEvents].[ThreatCategory] as [category],
    [EPOEvents].[ThreatSeverity] as [severity_id],
    [EPOEventFilterDesc].[Name] as [event_description],
    [EPOEvents].[DetectedUTC] as [detected_timestamp],
    [EPOEvents].[TargetFileName] as [file_name],
    [EPOEvents].[AnalyzerDetectionMethod] as [detection_method],
    [EPOEvents].[ThreatActionTaken] as [vendor_action],
    CAST([EPOEvents].[ThreatHandled] as int) as [threat_handled],
    [EPOEvents].[TargetUserName] as [logon_user],
    [EPOComputerPropertiesMT].[UserName] as [user],
    [EPOComputerPropertiesMT].[DomainName] as [dest_nt_domain],
    [EPOEvents].[TargetHostName] as [dest_dns],
    [EPOEvents].[TargetHostName] as [dest_nt_host],
    [EPOComputerPropertiesMT].[IPHostName] as [fqdn],
    [dest_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOComputerPropertiesMT].[IPV4x] + 2147483648))),4,1))) ),
    [EPOComputerPropertiesMT].[SubnetMask] as [dest_netmask],
    [EPOComputerPropertiesMT].[NetAddress] as [dest_mac],
    [EPOComputerPropertiesMT].[OSType] as [os],
    [EPOComputerPropertiesMT].[OSBuildNum] as [sp],
    [EPOComputerPropertiesMT].[OSVersion] as [os_version],
    [EPOComputerPropertiesMT].[OSBuildNum] as [os_build],
    [EPOComputerPropertiesMT].[TimeZone] as [timezone],
    [EPOEvents].[SourceHostName] as [src_dns],
    [src_ip] = ( convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),1,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),2,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),3,1)))+'.'+convert(varchar(3),convert(tinyint,substring(convert(varbinary(4),convert(bigint,([EPOEvents].[SourceIPV4] + 2147483648))),4,1))) ),
    [EPOEvents].[SourceMAC] as [src_mac],
    [EPOEvents].[SourceProcessName] as [process],
    [EPOEvents].[SourceURL] as [url],
    [EPOEvents].[SourceUserName] as [source_logon_user],
    [EPOComputerPropertiesMT].[IsPortable] as [is_laptop],
    [EPOEvents].[AnalyzerName] as [product],
    [EPOEvents].[AnalyzerVersion] as [product_version],
    [EPOEvents].[AnalyzerEngineVersion] as [engine_version],
    [EPOEvents].[AnalyzerDATVersion] as [dat_version],
    [EPOProdPropsView_VIRUSCAN].[datver] as [vse_dat_version],
    [EPOProdPropsView_VIRUSCAN].[enginever64] as [vse_engine64_version],
    [EPOProdPropsView_VIRUSCAN].[enginever] as [vse_engine_version],
    [EPOProdPropsView_VIRUSCAN].[hotfix] as [vse_hotfix],
    [EPOProdPropsView_VIRUSCAN].[productversion] as [vse_product_version],
    [EPOProdPropsView_VIRUSCAN].[servicepack] as [vse_sp]
FROM [EPOEvents]
LEFT JOIN [EPOLeafNodeMT] ON [EPOEvents].[AgentGUID] = [EPOLeafNodeMT].[AgentGUID]
LEFT JOIN [EPOProdPropsView_VIRUSCAN] ON [EPOLeafNodeMT].[AutoID] = [EPOProdPropsView_VIRUSCAN].[LeafNodeID]
LEFT JOIN [EPOComputerPropertiesMT] ON [EPOLeafNodeMT].[AutoID] = [EPOComputerPropertiesMT].[ParentID]
LEFT JOIN [EPOEventFilterDesc] ON [EPOEvents].[ThreatEventID] = [EPOEventFilterDesc].[EventId]
WHERE [EPOEvents].[AutoID] > ? AND ([EPOEventFilterDesc].[Language]='0409')
ORDER BY [EPOEvents].[AutoID] ASC
Good afternoon! I have a XPRT_002_SYSAT-41777_202110020712.csv file. After some time, exactly the same XPRT_002_SYSAT-41777_202110020712.csv file appears in my directory, with exactly the same content but a different modification time. In this case the system indexes all events from this file twice and I get duplicates. I know they can be filtered with dedup _raw, but that is not an option for me because it severely degrades search performance. Are there any other ways to configure indexing based on file content rather than name and size, so that when the content matches a file already indexed, it is not indexed again? I have tried:

crcSalt = <SOURCE>
CHECK_METHOD = modtime
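For reference, a minimal sketch of where those two settings would normally be placed (the monitor path, sourcetype, and source pattern below are placeholders, not taken from the post):

inputs.conf (on the instance doing the file monitoring):
[monitor:///path/to/csv/dir]
sourcetype = my_csv_sourcetype
crcSalt = <SOURCE>

props.conf:
[source::/path/to/csv/dir/*.csv]
CHECK_METHOD = modtime

As far as I understand, crcSalt = <SOURCE> only adds the file path to the CRC seed, and CHECK_METHOD = modtime makes the modification time part of the check, so neither on its own prevents re-indexing of identical content that arrives with a new modification time.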
Hi, I would like to ask for help with the following problem: We have a SH cluster (3 nodes) and an IDX cluster (3 nodes). We upgraded from 8.0.9 to 8.1.6 because of the EOS of the 8.0 version. Everything looks fine except one thing - sometimes this happens: I run a search. The search starts, but after a while it gets stuck (on the line below the SPL query input, the number of events stops increasing) and after about 5 minutes the search ends with the error message "Streamed search execute failed because: Error in 'lookup' command: Failed to re-open lookup file: '/srv/app/int/secmon/splunk/var/run/searchpeers/08270BDA-BE03-4A78-8C6C-95A9CE10BB8D-1633508003/kvstore_s_SA-IdeRjww0FotymhlCIaS1cqkc05a_assetsXy0Y9f6F5lMW4rOy8KLC@P22'" It happens completely randomly, no matter what data I search for. Sometimes this message is generated by only 1 IDX node, sometimes by 2, sometimes by all 3 nodes in the IDX cluster. The error message is always exactly the same (except the part "1633508003", which is the time of the search). Sometimes I get partial results (some events returned), sometimes none (0 events returned). Before the upgrade there was no message like this. Could someone help with this? Is it related to the upgrade? And how can it be fixed? I tried searching the Splunk Community and Google, but did not find anything useful... Thanks in advance. Lukas Mecir
Hi, how can I calculate the percentage of each ErrorCode by servername? Here is the SPL:

index="my_index"
| rex field=source "\/log\.(?<servername>\w+)."
| rex "Err\-ErrorCode\[(?<ErrorCode>\d+)"

Expected output:

Servername   ErrorCode   Percentage
server1      404         50%
             500         40%
             200         10%
server2      500         50%
             404         45%
             200         5%

Any idea? Thanks
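One possible approach (a sketch only, assuming the two rex extractions above populate servername and ErrorCode as intended): count events per servername/ErrorCode pair, add the per-server total with eventstats, then derive the percentage from the two counts.

index="my_index"
| rex field=source "\/log\.(?<servername>\w+)."
| rex "Err\-ErrorCode\[(?<ErrorCode>\d+)"
| stats count BY servername ErrorCode
| eventstats sum(count) AS total BY servername
| eval Percentage=round(count*100/total,1)."%"
| sort servername, -count
| fields servername ErrorCode Percentage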
Hi Splunkers, We are upgrading Splunk from version 7.3.4 to 8.1.x. Can someone help us identify the most stable release within the 8.1.x line? Please assist us. Thanks, Abhijeet B.
City contains 12 names. In the result I can see a city name only when it has products; if the product count is zero, the city name does not show up.

base search
| stats count(product) AS Total BY City
| fillnull value=0 City

Current result:

City      Total
citry1    1
citry5    50
citry10   15

Expected result:

City      Total
citry1    1
citry2    0
citry3    0
citry4    0
citry5    50
citry6    0
citry7    0
citry8    0
citry9    0
citry10   15
citry11   0
citry12   0
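fillnull cannot help here because the zero-count cities never produce a row to fill; the missing rows have to be added. A rough sketch, assuming the 12 city names are known in advance (an inputlookup of the city list would work the same way as the makeresults below):

base search
| stats count(product) AS Total BY City
| append
    [| makeresults
     | eval City=split("citry1,citry2,citry3,citry4,citry5,citry6,citry7,citry8,citry9,citry10,citry11,citry12", ",")
     | mvexpand City
     | eval Total=0
     | table City Total]
| stats sum(Total) AS Total BY City

The appended rows contribute 0 to each city's total, so cities with real events keep their counts and the others show 0.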
Hi, I have uploaded a CSV file and would like to know if it is possible to display only the content of the file?

Feature   Business   Environment   Bench Secs   Grade
Risk      Set offs   Production    300          10
Ops       Count      UAT           500          11
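If the CSV was uploaded as a lookup file, a minimal sketch (the file name is a placeholder for whatever the upload was saved as) is to read it back directly instead of searching an index:

| inputlookup my_uploaded_file.csv

If the file was indexed as events instead, restricting the search to that source and listing the CSV's column names with the table command gives a similar content-only view.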
Hi All, I know the topic is quite extensively documented in several posts within the Splunk community, but I could not really figure out what is best to apply in the case below.

Some context about the architecture in use. We have basically 3 layers (so 3 indexes) which a FE call goes through:
apigee
mise (microservices)
eCommerce application

In other words, a call initiated by the FE is routed to apigee, from where it goes to some microservice, which in turn might call the eCommerce application. The calls are uniquely identified by a requestId, so given an apigee call I can find exactly which related calls went to eCommerce thanks to the requestId.

Splunk dashboard. I'm building a dashboard where:

Panel 1: I list all apigee calls grouped by common URL to get some stats out of it (so far so good). Something like:
> oce/soe/orders/v1/*/billingProfile (normalized url), then display the columns: count, avg(duration), perc90(duration)
> oce/soe/orders/v1/*/delivery-method (normalized url), then display the columns: count, avg(duration), perc90(duration)
> ...

Panel 2: given the focus on one apigee "normalized" call from panel 1, I list all related eCommerce calls. The goal is to grab some stats over those calls, grouping them by common URLs and then taking the average duration and perc90. To make this work I use a join, but it is very slow. Something like, given the focus on the billingProfile apigee call above:
> ecommerceapp/billinginfo/store/*/ (normalized url), then display the columns: count, avg(duration), perc90(duration)

I want to ask if you see any other way to reach the same goal without a join, or if you have any general hints to improve the performance. Thanks in advance, Vincenzo

I report below the search I'm currently using, with the join being the part of interest:

index=ecommerce_prod (namespace::prod-onl) (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") "Status=20*"
| where isnotnull(SCS_Request_ID)
| rex field=_raw "(?<operation>(?<![\w\d])(GET|POST|PUT|DELETE)(?![\w\d]))"
| rex field=_raw "(?<=[/^(POST|GET|PUT|DELETE)$/] )(?<service>[\/a-zA-Z\.].+?(?=HTTP))"
| rex field=_raw "(?<=Duration=)(?<duration>[0-9]*)"
| eval temp=split(service,"/")
| eval field1=mvindex(temp,1)
| eval field2=mvindex(temp,2)
| eval field3=mvindex(temp,3)
| eval field4=mvindex(temp,4)
| eval field4=if(like(field4,"%=%") OR like(field4,"%?%"), "*", field4)
| eval field5=mvindex(temp,5)
| eval field5=if(like(field5, "%=%") OR like(field5,"%?%"), "*", field5)
| eval url_short=if(isnull(field5),field1."/".field2."/".field3."/".field4."/", field1."/".field2."/".field3."/".field4."/".field5)
| eval fullName = operation." ".url_short
| table SCS_Request_ID, operation, url_short, duration
| join SCS_Request_ID
    [ search index="apigee" (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") status="20*"
      | rename tgrq_h_scs-request-id as SCS_Request_ID
      | table SCS_Request_ID
      | where isnotnull(SCS_Request_ID) ]
| stats count, avg(duration) as avg_, perc90(duration) as perc90_ by operation, url_short
| eval avg_=round(avg_,2)
| eval perc90_=round(perc90_,2)
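One join-free variant worth considering, only as a sketch: it assumes SCS_Request_ID is already a search-time field in the ecommerce index (the original query references it without a rex) and that the apigee subsearch stays under the default subsearch limits. The idea is to move the apigee lookup into a subsearch that filters the base search, so the join disappears entirely.

index=ecommerce_prod (namespace::prod-onl) (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") "Status=20*"
    [ search index="apigee" (earliest="$timeRange1.earliest$" latest="$timeRange1.latest$") status="20*"
      | rename tgrq_h_scs-request-id as SCS_Request_ID
      | where isnotnull(SCS_Request_ID)
      | fields SCS_Request_ID ]
| rex field=_raw "(?<operation>(?<![\w\d])(GET|POST|PUT|DELETE)(?![\w\d]))"
| rex field=_raw "(?<=[/^(POST|GET|PUT|DELETE)$/] )(?<service>[\/a-zA-Z\.].+?(?=HTTP))"
| rex field=_raw "(?<=Duration=)(?<duration>[0-9]*)"
| eval temp=split(service,"/")
| eval field1=mvindex(temp,1), field2=mvindex(temp,2), field3=mvindex(temp,3), field4=mvindex(temp,4), field5=mvindex(temp,5)
| eval field4=if(like(field4,"%=%") OR like(field4,"%?%"), "*", field4)
| eval field5=if(like(field5,"%=%") OR like(field5,"%?%"), "*", field5)
| eval url_short=if(isnull(field5), field1."/".field2."/".field3."/".field4."/", field1."/".field2."/".field3."/".field4."/".field5)
| stats count, avg(duration) as avg_, perc90(duration) as perc90_ by operation, url_short
| eval avg_=round(avg_,2), perc90_=round(perc90_,2)

The subsearch expands into a list of SCS_Request_ID filters on the outer search, so only ecommerce events matching an apigee request are processed. If the number of distinct request IDs in the time range can exceed the subsearch limits, a stats-based correlation over both indexes (searching them together and grouping by SCS_Request_ID) is the usual fallback.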
Dear Splunk community, I am using rex to extract data from _raw and put it into new fields, like so:

[10/5/21 23:02:25:134 CEST] 00000063 SystemOut O 05 Oct 2021 23:02:25:133 [INFO] [CRONSERVER] [CID-MXSCRIPT-1673979] SCRIPTNAME - 00 - Function:httpDiscovery(POST, https, host, /call, BASE64ENC(USER:PASSWORD)) Profile = MYPROFILE - Scope = MYHOSTNAME - End - Result(strResponseStatus, stResponseReason, strResponseData)=([200], [OK], [{"message":"SUCCESS"}{"runId":"2021100523022485"} ])

| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"

This creates the new fields but also shows _raw. I don't want _raw to show, but if I use this:

| table _time

instead of this:

| table _time, _raw

the fields that I create no longer show, so I have to include _raw as well. I could use mode=sed with rex to delete data from _raw and, for example, only keep the profile and then rename _raw to profile, but I don't have any experience with sed and I would prefer an easier way. My question: is it possible to hide _raw and still use rex on _raw to create new fields?

Thanks.
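For what it's worth, a minimal sketch of what is usually enough here (field names as in the post above): rex reads _raw at search time regardless of whether _raw is displayed, so listing only the extracted fields in table keeps them visible while hiding _raw.

| rex field=_raw "Scope = (?<fqdn>\S*)"
| rex field=_raw "Profile = (?<profile>\S*)"
| table _time fqdn profile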
Hi All, I am trying to merge the rows of a column into one row for the table below:

App_Name                   Country       Last_Deployed             Temp_Version
com.citiao.cimainproject   China         2021-09-24 13:30:04.39    1.0.12.20210907193849359
com.citiao.cimainproject   HongKong      2021-09-24 11:48:15.176   1.0.12.20210907193849359
com.citiao.cimainproject   Indonesia     2021-09-10 13:17:38.254   1.0.12.20210907193849359
com.citiao.cimainproject   Malaysia      2021-09-10 14:54:54.098   1.0.12.20210907193849359
com.citiao.cimainproject   Philippines   2021-09-24 11:58:44.034   1.0.12.20210907193849359
com.citiao.cimainproject   Singapore     2021-09-10 12:53:25.539   1.0.12.20210907193849359
com.citiao.cimainproject   Thailand      2021-09-24 14:01:09.682   1.0.12.20210907193849359
com.citiao.cimainproject   Vietnam       2021-09-10 15:00:06.598   1.0.12.20210907193849359

I used this query:

my query
| stats values(App_Temp_Name) as App_Name latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by Country
| table App_Name,Country,Last_Deployed,Temp_Version

But I need to merge the rows of the App_Name column into one row, keeping the other columns as they are, like:

App_Name                   Country       Last_Deployed             Temp_Version
com.citiao.cimainproject   China         2021-09-24 13:30:04.39    1.0.12.20210907193849359
                           HongKong      2021-09-24 11:48:15.176   1.0.12.20210907193849359
                           Indonesia     2021-09-10 13:17:38.254   1.0.12.20210907193849359
                           Malaysia      2021-09-10 14:54:54.098   1.0.12.20210907193849359
                           Philippines   2021-09-24 11:58:44.034   1.0.12.20210907193849359
                           Singapore     2021-09-10 12:53:25.539   1.0.12.20210907193849359
                           Thailand      2021-09-24 14:01:09.682   1.0.12.20210907193849359
                           Vietnam       2021-09-10 15:00:06.598   1.0.12.20210907193849359

Please help me modify the query to get the desired output.

Thank you very much..!!
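One possible way to get that display, as a sketch built on the field names in the query above (it only blanks the repeated text, it does not really merge cells): number the rows per application with streamstats and clear App_Name on every row after the first.

my query
| stats latest(LAST_DEPLOYED) as Last_Deployed latest(APP_TEMP_VER) as Temp_Version by App_Temp_Name Country
| rename App_Temp_Name as App_Name
| streamstats count as row_in_group by App_Name
| eval App_Name=if(row_in_group=1, App_Name, "")
| fields App_Name Country Last_Deployed Temp_Version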
Hi, I am trying to get the day-wise error count by data.message, but only for messages whose error count on the previous day was more than 50.

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval time=strftime(_time,"%Y-%m-%d")
| table time, data.message
<condition: if the previous day's count for a data.message is less than 50, it should be excluded from the stats>
| stats count by time, data.message
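One possible shape for that condition, as a sketch (it assumes the search time range covers yesterday and that the per-day counts are computed first, as below): compute each data.message's count for the previous day with eventstats and keep only the messages where that count exceeds 50.

index="eshop" NOT(index=k8*dev OR index=k8*test) tag=error
| eval day=strftime(_time, "%Y-%m-%d")
| stats count by day, data.message
| eventstats max(eval(if(day=strftime(relative_time(now(), "-1d@d"), "%Y-%m-%d"), count, null()))) as yesterday_count by data.message
| where yesterday_count > 50
| fields day, data.message, count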
I am looking for O365 use cases related to MS Teams, SharePoint, Exchange, and OneDrive. Currently the data is populated in Azure and needs to be ingested into Splunk. What kind of use cases can I create based on these data sources (MS Teams, SharePoint, Exchange, OneDrive)? I am also looking for malicious-activity and threat-level O365 use cases. Please suggest.
Hi, I'm looking to use a heavy forwarder to append a string to specific log messages. I'm following the guide here https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Anonymizedata (specifically the "Anonymize data with a regular expression transform" part), which only seems to mask data. I don't want to alter the log entry as such, but rather add something like "<Review Required>" to the end of any log that matches a specific regex. Can this be done using the heavy forwarder and transforms.conf?
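In principle the same index-time transform mechanism from that guide can rewrite _raw as the original event plus a suffix. A rough sketch, with the sourcetype, stanza name, and regex as placeholders to adapt (untested, so treat it as a starting point rather than a definitive config):

props.conf:
[your_sourcetype]
TRANSFORMS-append_review = append_review_required

transforms.conf:
[append_review_required]
REGEX = ^(.*your_specific_pattern.*)$
FORMAT = $1 <Review Required>
DEST_KEY = _raw

Because DEST_KEY = _raw replaces the event with FORMAT, the regex captures the entire original event in $1 and writes it back with the suffix; events that do not match the regex are left untouched. The transform has to run on the first heavy forwarder or indexer that parses the data.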
Hello, I didn't find a solution for this here, but I managed to get it to work, so I'm sharing it. First of all, if you want to split your search across multiple trellis panels, you can do that with:

index="_internal" | timechart count by sourcetype

and activate trellis split by sourcetype. But inside each panel you may also want a split by status (for example). In that case, please try this query:

index="_internal"
| bin _time
| stats count by _time, sourcetype, status
| eval {status}=count
| fields - status, count
| fillnull value=0
| stats sum(*) as * by _time, sourcetype
Hello, I want to set up AppDynamics for my department. I logged into AppDynamics, but the setup for integrating the sample application (from your GitHub repo) is too complex and nested: one application requires another application as a prerequisite, and the chain goes on. Is there anyone available to demonstrate setting up a sample application, so that I can look around the full feature set and plan how to integrate AppDynamics in my organization?
Hello, As per the official ES documentation, the threat intel feeds below are enabled by default:

Mozilla Public Suffix List
MITRE ATT&CK Framework
ICANN Top-level Domains List

In addition, it also mentions that these are included. But when I check our ES app under Settings >> Threat Intel Management, I see only 3 feeds, as below. Where are those default feeds mentioned above?
We have a TA for Splunk and are using Splunk's internal library (splunk.entity) to fetch credentials from passwords.conf using the code below.

On some Splunk Enterprise instances the code works properly and returns the username and clear password. But on some Splunk Enterprise instances and on Splunk Cloud, we have observed that the value of ['eai:acl']['app'] is Null, which causes the code to raise an exception.

We would like to know why some Splunk instances return a Null value for ['eai:acl']['app'].

Code:

import splunk.entity as entity

myapp = 'APP-NAME'
realm = 'APP-NAME-Api'

try:
    entities = entity.getEntities(['admin', 'passwords'], namespace=myapp,
                                  owner='nobody', sessionKey=session_key)
except Exception as e:
    raise Exception("Could not get %s credentials from Splunk. Error: %s" % (myapp, str(e)))

for i, c in list(entities.items()):
    if c['eai:acl']['app'] == myapp and c['realm'] == realm:
        return c['username'], c['clear_password']

raise Exception("No credentials found.")

We also tried the CURL command below, but the field ['eai:acl']['app'] is missing in the response:

curl -k -u <splunk_username>:<splunk_password> https://<host>:8089/services/storage/passwords
Meet Hiroki.Ito, AppDynamics Support Engineer, in our first Staff Edition of the Community Member Spotlight series. He shares his professional journey so far, his problem-solving mindset and methods, what inspires him, and more. — Ryan

Contents
A day in the life of Hiroki.Ito
AppD in your work
Staying in the know
Life after hours
Inspiration and Insights

Hiroki Ito, Support Engineer at AppDynamics

A day in the life of Hiroki.Ito

What's a typical work day like for you here at AppD?
I am a Support Engineer, responsible for answering queries and troubleshooting AppDynamics customer issues. I am part of the Support team in Japan, so my primary role is dealing with queries from customers in Japan, though my team also deals with global ones. Each query is unique, and I usually research, analyze logs, and replicate issues to resolve customer issues. Other than that, I usually check this Community to see if there are posts I can answer, or to try to learn something new to share with my team.

What has your journey into the field been like?
Initially I worked as a software engineer in the financial industry. I maintained systems related to foreign currency exchange. Then I moved to a different company more focused on IT. There I learned a lot regarding technical skills such as Java, Linux, AWS and experienced a full cycle of system development. I had a chance to work on six different projects/systems. After that, I had the opportunity to join AppDynamics as a Support Engineer last May.

What feeds your interest in your work?
It is interesting that I can learn a lot of new things from my work. There are many kinds of queries from customers, and they are very different from each other. AppDynamics has various kinds of features and supports various environments. There are many features I'm not familiar with yet. But by investigating issues, by checking logs and replicating the local environment, I learn more details about those features. Customers also use AppDynamics in various kinds of environments, so I learn those environments too.

AppD in your work

Have you learned anything interesting about how different customers use AppDynamics to achieve their goals?
I found it interesting that some customers use AppDynamics API tools like Dexter to retrieve data from AppDynamics and analyze it. AppDynamics can collect valuable data, especially using analytics, and AppDynamics API allows customers to retrieve the data systematically so that they can analyze it in a systematic way.

What kind of experiences have you had with the AppD Community?
The AppDynamics Community is public, and anyone can see the posts. I think posts are very valuable, so that customers who encounter the same issue can look for Community posts and resolve problems by themselves. Knowledge Base articles are useful because I often look at them while investigating customer's issues, or share them with customers when I request additional investigation.

What are your top 2 AppDynamics hot tips?
I personally think Live Preview and downloading debug logs from the Controller UI are 2 hot tips. Live Preview allows customers to inspect the live data streaming of an application. When configuring custom match rules, Business transactions can be undetected for different reasons, but the Live Preview feature allows changing the configuration in real time and shows what kinds of business transactions to be detected, so it is very useful. Customers can download debug logs of app or database Agents from the Controller UI. Debug logs are very useful when there are problems in AppDynamics. Debug logs can be also obtained by changing Agent configurations, but this method requires agent restart, which may take too much time or otherwise not be acceptable in some situations. Downloading from the Controller is much easier and faster to do.

What self-help issues do you notice customers experience most frequently?
Some errors are already addressed in AppDynamics documentation or in the Community. So if you find an error listed in a log, searching that error can be helpful. In addition, customers often ask the IP range of SaaS Controllers, but (for the SaaS subscription) you can see the IP range from My AppDynamics Account > View Details on the Actions column.

Staying in the know

What's your best way of keeping up with industry news?
I usually check Twitter, Reddit, and IT news media such as TechCrunch to see the latest information.

What have you learned in the past year that you wish you had known when you started your career?
I wish I had known more about the cloud, like AWS. Learning by doing is so important in learning IT related skills. However, it may not be easy to prepare hardware such as Linux servers or databases by oneself. With the cloud, I can easily set up the environments I want, so I can learn more efficiently.

Life after-hours

What are some of your favorite things to experience outside of work?
Outside of work, I enjoy watching Sumo—a Japanese wrestling tournament where wrestlers try to force each other out of a circular ring or touch the ground—and reading Manga.

Inspiration and Insights

How—or where—do you find inspiration?
I often find inspiration when I am very relaxed. When there are many things I have to worry about, I usually go for a walk or sleep so that my thoughts are organized.

What advice would you give someone who is up and coming in your field of work?
It is important to try working on something that looks unfeasible at first glance. Repeating that may significantly enhance skills. Failing is fine, but not trying and always working on easy things may not be good.
I'm getting the error below for TA-defender-atp-hunting on our HF:

Unable to initialize modular input "defender_hunting_query" defined in the app "TA-defender-atp-hunting": Introspecting scheme=defender_hunting_query: script running failed (exited with code 1).

splunkd logs:

ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': The script at path=/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/TA_defender_atp_hunting_rh_defender_hunting_query.py has thrown an exception=Traceback (most recent call last)
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/bin/runScript.py", line 82, in <module>
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':exec(open(REAL_SCRIPT_NAME).read())
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "<string>", line 4, in <module>
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/ta_defender_atp_hunting/splunktaucclib/rest_handler/endpoint/validator.py", line 388
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':except ValueError, exc:
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': ^
10-06-2021 03:20:48.823 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':SyntaxError: invalid syntax
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':Traceback (most recent call last):
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/bin/runScript.py", line 82, in <module>
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':exec(open(REAL_SCRIPT_NAME).read())
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "<string>", line 4, in <module>
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':File "/opt/splunk/etc/apps/TA-defender-atp-hunting/bin/ta_defender_atp_hunting/splunktaucclib/rest_handler/endpoint/validator.py", line 388
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':except ValueError, exc:
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': ^
10-06-2021 03:20:48.824 +0000 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup':SyntaxError: invalid syntax
10-06-2021 03:20:48.828 +0000 ERROR AdminManagerExternal - External handler failed with code '1' and output: ''. See splunkd.log for stderr output.

I'm not able to access the Defender ATP hunting app via the UI. Would anyone know how to resolve this issue? Thanks in advance!